We might want to break it up into smaller pieces (e.g. tools in one
place, documents in another) but let's worry about that later.
Signed-off-by: Solomon Hykes <solomon@docker.com>
deleted file mode 100644
@@ -1,17 +0,0 @@
# Docker Governance Advisory Board Meetings

In the spirit of openness, Docker created a Governance Advisory Board, and committed to making all materials and notes from the meetings of this group public.
All output from the meetings should be considered proposals only, subject to the review and approval of the community and the project leadership.

The materials from the first Docker Governance Advisory Board meeting, held on October 28, 2014, are available in a
[Google Docs folder](http://goo.gl/Alfj8r)

These include:

* First Meeting Notes
* DGAB Charter
* Presentation 1: Introductory Presentation, including State of The Project
* Presentation 2: Overall Contribution Structure/Docker Project Core Proposal
* Presentation 3: Long Term Roadmap/Statement of Direction

deleted file mode 100644
@@ -1,130 +0,0 @@
# The Docker Maintainer manual

## Introduction

Dear maintainer. Thank you for investing the time and energy to help
make Docker as useful as possible. Maintaining a project is difficult,
sometimes unrewarding work. Sure, you will get to contribute cool
features to the project. But most of your time will be spent reviewing,
cleaning up, documenting, answering questions, and justifying design
decisions - while everyone else has all the fun! But remember - the quality
of the maintainers' work is what distinguishes the good projects from
the great. So please be proud of your work, even the unglamorous parts,
and encourage a culture of appreciation and respect for *every* aspect
of improving the project - not just the hot new features.

This document is a manual for maintainers old and new. It explains what
is expected of maintainers, how they should work, and what tools are
available to them.

This is a living document - if you see something out of date or missing,
speak up!

## What is a maintainer's responsibility?

It is every maintainer's responsibility to:

1. Expose a clear road map for improving their component.
2. Deliver prompt feedback and decisions on pull requests.
3. Be available to anyone with questions, bug reports, criticism etc.
   on their component. This includes IRC, GitHub requests and the mailing
   list.
4. Make sure their component respects the philosophy, design and
   road map of the project.

## How are decisions made?

Short answer: with pull requests to the Docker repository.

Docker is an open-source project with an open design philosophy. This
means that the repository is the source of truth for EVERY aspect of the
project, including its philosophy, design, road map, and APIs. *If it's
part of the project, it's in the repo. If it's in the repo, it's part of
the project.*

As a result, all decisions can be expressed as changes to the
repository. An implementation change is a change to the source code. An
API change is a change to the API specification. A philosophy change is
a change to the philosophy manifesto, and so on.

All decisions affecting Docker, big and small, follow the same 3 steps:

* Step 1: Open a pull request. Anyone can do this.

* Step 2: Discuss the pull request. Anyone can do this.

* Step 3: Accept (`LGTM`) or refuse a pull request. The relevant maintainers do
  this (see below "Who decides what?").
  + Accepting pull requests
    - If the pull request appears to be ready to merge, give it a `LGTM`, which
      stands for "Looks Good To Me".
    - If the pull request has some small problems that need to be changed, make
      a comment addressing the issues.
    - If the changes needed to a PR are small, you can add a "LGTM once the
      following comments are addressed..."; this will reduce needless back and
      forth.
    - If the PR only needs a few changes before being merged, any MAINTAINER can
      make a replacement PR that incorporates the existing commits and fixes the
      problems before a fast track merge.
  + Closing pull requests
    - If a PR appears to be abandoned, after having attempted to contact the
      original contributor, then a replacement PR may be made. Once the
      replacement PR is made, any contributor may close the original one.
    - If you are not sure if the pull request implements a good feature or you
      do not understand the purpose of the PR, ask the contributor to provide
      more documentation. If the contributor is not able to adequately explain
      the purpose of the PR, the PR may be closed by any MAINTAINER.
    - If a MAINTAINER feels that the pull request is sufficiently architecturally
      flawed, or if the pull request needs significantly more design discussion
      before being considered, the MAINTAINER should close the pull request with
      a short explanation of what discussion still needs to be had. It is
      important not to leave such pull requests open, as this will waste both the
      MAINTAINER's time and the contributor's time. It is not good to string a
      contributor along for weeks or months, having them make many changes to a PR
      that will eventually be rejected.

## Who decides what?

All decisions are pull requests, and the relevant maintainers make
decisions by accepting or refusing pull requests. Review and acceptance
by anyone is denoted by adding a comment in the pull request: `LGTM`.
However, only currently listed `MAINTAINERS` are counted towards the
required majority.

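Since only listed maintainers count towards the required majority, tallying a pull request means intersecting the `LGTM` commenters with the `MAINTAINERS` list. A minimal sketch of that bookkeeping (the file names here are illustrative inputs, not part of any Docker tooling):

```bash
# Count how many LGTM commenters are currently listed maintainers.
# maintainers.txt: one handle per line (illustrative input file).
# lgtms.txt: one handle per LGTM comment (illustrative input file).
count_maintainer_lgtms() {
    grep -Fxc -f "$1" "$2"
}
```

Anyone can comment `LGTM`, but only the lines that match the maintainers list move the count forward.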
Docker follows the timeless, highly efficient and totally unfair system
known as [Benevolent dictator for
life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life), with
yours truly, Solomon Hykes, in the role of BDFL. This means that all
decisions are made, by default, by Solomon. Since making every decision
myself would be highly unscalable, in practice decisions are spread
across multiple maintainers.

The relevant maintainers for a pull request can be worked out in 2 steps:

* Step 1: Determine the subdirectories affected by the pull request. This
  might be `src/registry`, `docs/source/api`, or any other part of the repo.

* Step 2: Find the `MAINTAINERS` file which affects this directory. If the
  directory itself does not have a `MAINTAINERS` file, work your way up
  the repo hierarchy until you find one.

There is also a `hacks/getmaintainers.sh` script that will print out the
maintainers for a specified directory.

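The lookup rule above (start at the affected directory and walk up until a `MAINTAINERS` file appears) can be sketched in a few lines of shell. This is an illustration of the rule only, not the contents of the actual script:

```bash
# Walk up from a directory until a MAINTAINERS file is found.
find_maintainers() {
    dir="$1"
    while [ -n "$dir" ] && [ "$dir" != "/" ] && [ "$dir" != "." ]; do
        if [ -f "$dir/MAINTAINERS" ]; then
            echo "$dir/MAINTAINERS"
            return 0
        fi
        dir=$(dirname "$dir")   # step one level up the hierarchy
    done
    return 1
}
```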
### I'm a maintainer, and I'm going on holiday

Please let your co-maintainers and other contributors know by raising a pull
request that comments out your `MAINTAINERS` file entry using a `#`.

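For illustration, a commented-out entry might look like this (the name and handle are made up; follow the entry format already used in the file):

```
# Jane Doe <jane@example.com> (@janedoe)
```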
### I'm a maintainer. Should I make pull requests too?

Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.

### Who assigns maintainers?

Solomon has final `LGTM` approval for all pull requests to `MAINTAINERS` files.

### How is this process changed?

Just like everything else: by making a pull request :)
deleted file mode 100644
@@ -1,336 +0,0 @@
# Dear Packager,

If you are looking to make Docker available on your favorite software
distribution, this document is for you. It summarizes the requirements for
building and running the Docker client and the Docker daemon.

## Getting Started

We want to help you package Docker successfully. Before doing any packaging, a
good first step is to introduce yourself on the [docker-dev mailing
list](https://groups.google.com/d/forum/docker-dev), explain what you're trying
to achieve, and tell us how we can help. Don't worry, we don't bite! There might
even be someone already working on packaging for the same distro!

You can also join the IRC channel - #docker and #docker-dev on Freenode are both
active and friendly.

We like to refer to Tianon ("@tianon" on GitHub and "tianon" on IRC) as our
"Packagers Relations", since he's always working to make sure our packagers have
a good, healthy upstream to work with (both in our communication and in our
build scripts). If you're having any kind of trouble, feel free to ping him
directly. He also likes to keep track of what distributions we have packagers
for, so feel free to reach out to him even just to say "Hi!"

## Package Name

If possible, your package should be called "docker". If that name is already
taken, a second choice is "lxc-docker", but with the caveat that "LXC" is now an
optional dependency (as noted below). Another possible choice is "docker.io".

## Official Build vs Distro Build

The Docker project maintains its own build and release toolchain. It is pretty
neat and entirely based on Docker (surprise!). This toolchain is the canonical
way to build Docker. We encourage you to give it a try, and if the circumstances
allow you to use it, we recommend that you do.

You might not be able to use the official build toolchain - usually because your
distribution has a toolchain and packaging policy of its own. We get it! Your
house, your rules. The rest of this document should give you the information you
need to package Docker your way, without denaturing it in the process.

## Build Dependencies

To build Docker, you will need the following:

* A recent version of git and mercurial
* Go version 1.3 or later
* A clean checkout of the source added to a valid [Go
  workspace](http://golang.org/doc/code.html#Workspaces) under the path
  *src/github.com/docker/docker* (unless you plan to use `AUTO_GOPATH`,
  explained in more detail below).

To build the Docker daemon, you will additionally need:

* An amd64/x86_64 machine running Linux
* SQLite version 3.7.9 or later
* libdevmapper version 1.02.68-cvs (2012-01-26) or later from lvm2 version
  2.02.89 or later
* btrfs-progs version 3.8 or later (including commit e5cb128 from 2013-01-07)
  for the necessary btrfs headers

Be sure to also check out Docker's Dockerfile for the most up-to-date list of
these build-time dependencies.

### Go Dependencies

All Go dependencies are vendored under "./vendor". They are used by the official
build, so the source of truth for the current version of each dependency is
whatever is in "./vendor".

To use the vendored dependencies, simply make sure the path to "./vendor" is
included in `GOPATH` (or use `AUTO_GOPATH`, as explained below).

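For example, a build script that uses the vendored dependencies directly might extend `GOPATH` like this (the checkout location is an assumption; substitute wherever your build environment keeps the source):

```bash
# Prepend the vendored dependencies to GOPATH (path is illustrative).
docker_src="$PWD"   # assumed to be a checkout of docker/docker
export GOPATH="$docker_src/vendor${GOPATH:+:$GOPATH}"
```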
If you would rather (or must, due to distro policy) package these dependencies
yourself, take a look at "./hack/vendor.sh" for an easy-to-parse list of the
exact version for each.

NOTE: if you're not able to package the exact version (to the exact commit) of a
given dependency, please get in touch so we can remediate! Who knows what
discrepancies can be caused by even the slightest deviation. We promise to do
our best to make everybody happy.

## Stripping Binaries

Please, please, please do not strip any compiled binaries. This is really
important.

In our own testing, stripping the resulting binaries sometimes results in a
binary that appears to work, but more often causes random panics, segfaults, and
other issues. Even if the binary appears to work, please don't strip.

See the following quotes from Dave Cheney, which explain this position better
from the upstream Golang perspective.

### [go issue #5855, comment #3](https://code.google.com/p/go/issues/detail?id=5855#c3)

> Super super important: Do not strip go binaries or archives. It isn't tested,
> often breaks, and doesn't work.

### [launchpad golang issue #1200255, comment #8](https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1200255/comments/8)

> To quote myself: "Please do not strip Go binaries, it is not supported, not
> tested, is often broken, and doesn't do what you want"
>
> To unpack that a bit
>
> * not supported, as in, we don't support it, and recommend against it when
>   asked
> * not tested, we don't test stripped binaries as part of the build CI process
> * is often broken, stripping a go binary will produce anywhere from no, to
>   subtle, to outright execution failure, see above

### [launchpad golang issue #1200255, comment #13](https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1200255/comments/13)

> To clarify my previous statements.
>
> * I do not disagree with the debian policy, it is there for a good reason
> * Having said that, stripping Go binaries doesn't work, and nobody is
>   looking at making it work, so there is that.
>
> Thanks for patching the build formula.

## Building Docker

Please use our build script ("./hack/make.sh") for all your compilation of
Docker. If there's something you need that it isn't doing, or something it could
be doing to make your life as a packager easier, please get in touch with Tianon
and help us rectify the situation. Chances are good that other packagers have
probably run into the same problems and a fix might already be in the works, but
none of us will know for sure unless you harass Tianon about it. :)

All the commands listed within this section should be run with the Docker source
checkout as the current working directory.

### `AUTO_GOPATH`

If you'd rather not deal with setting up `GOPATH` appropriately, and prefer to
just get a "build that works", you should add something similar to this to
whatever script or process you're using to build Docker:

```bash
export AUTO_GOPATH=1
```

This will cause the build scripts to set up a reasonable `GOPATH` that
automatically and properly includes both docker/docker from the local
directory, and the local "./vendor" directory as necessary.

### `DOCKER_BUILDTAGS`

If you're building a binary that may need to be used on platforms that include
AppArmor, you will need to set `DOCKER_BUILDTAGS` as follows:

```bash
export DOCKER_BUILDTAGS='apparmor'
```

If you're building a binary that may need to be used on platforms that include
SELinux, you will need to use the `selinux` build tag:

```bash
export DOCKER_BUILDTAGS='selinux'
```

If your version of btrfs-progs (also called btrfs-tools) is < 3.16.1, then you
will need the following tag to skip the check for btrfs version headers:

```bash
export DOCKER_BUILDTAGS='btrfs_noversion'
```

There are build tags for disabling graphdrivers as well. By default, support
for all graphdrivers is built in.

To disable btrfs:

```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_btrfs'
```

To disable devicemapper:

```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_devicemapper'
```

To disable aufs:

```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_aufs'
```

NOTE: if you need to set more than one build tag, space-separate them:

```bash
export DOCKER_BUILDTAGS='apparmor selinux exclude_graphdriver_aufs'
```

### Static Daemon

If it is feasible within the constraints of your distribution, you should
seriously consider packaging Docker as a single static binary. A good comparison
is Busybox, which is often packaged statically as a feature to enable mass
portability. Because of the unique way Docker operates, being similarly static
is a "feature".

To build a static Docker daemon binary, run the following command (first
ensuring that all the necessary libraries are available in static form for
linking - see the "Build Dependencies" section above, and the relevant lines
within Docker's own Dockerfile that set up our official build environment):

```bash
./hack/make.sh binary
```

This will create a static binary under
"./bundles/$VERSION/binary/docker-$VERSION", where "$VERSION" is the contents of
the file "./VERSION". This binary is usually installed somewhere like
"/usr/bin/docker".

### Dynamic Daemon / Client-only Binary

If you are only interested in a Docker client binary, set `DOCKER_CLIENTONLY` to
a non-empty value using something similar to the following (this will prevent
the extra step of compiling dockerinit):

```bash
export DOCKER_CLIENTONLY=1
```

If you need to (due to distro policy, distro library availability, or for other
reasons) create a dynamically compiled daemon binary, or if you are only
interested in creating a client binary for Docker, use something similar to the
following:

```bash
./hack/make.sh dynbinary
```

This will create "./bundles/$VERSION/dynbinary/docker-$VERSION", which for
client-only builds is the important file to grab and install as appropriate.

For daemon builds, you will also need to grab and install
"./bundles/$VERSION/dynbinary/dockerinit-$VERSION", which is created from the
minimal set of Docker's codebase that _must_ be compiled statically (and is thus
a pure static binary). The acceptable locations Docker will search for this file
are as follows (in order):

* as "dockerinit" in the same directory as the daemon binary (i.e., if docker is
  installed at "/usr/bin/docker", then "/usr/bin/dockerinit" will be the first
  place this file is searched for)
* "/usr/libexec/docker/dockerinit" or "/usr/local/libexec/docker/dockerinit"
  ([FHS 3.0 Draft](http://www.linuxbase.org/betaspecs/fhs/fhs.html#usrlibexec))
* "/usr/lib/docker/dockerinit" or "/usr/local/lib/docker/dockerinit" ([FHS
  2.3](http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html#USRLIBLIBRARIESFORPROGRAMMINGANDPA))

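To make the order concrete, the search above could be probed from shell like this (purely illustrative; the real lookup happens inside the daemon binary itself):

```bash
# Probe the documented dockerinit locations in order.
find_dockerinit() {
    daemon_dir="$1"   # directory containing the docker daemon binary
    for candidate in \
        "$daemon_dir/dockerinit" \
        /usr/libexec/docker/dockerinit \
        /usr/local/libexec/docker/dockerinit \
        /usr/lib/docker/dockerinit \
        /usr/local/lib/docker/dockerinit
    do
        if [ -x "$candidate" ]; then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}
```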
If (and please, only if) one of the paths above is insufficient due to distro
policy or similar issues, you may use the `DOCKER_INITPATH` environment variable
at compile-time as follows to set a different path for Docker to search:

```bash
export DOCKER_INITPATH=/usr/lib/docker.io/dockerinit
```

If you find yourself needing this, please don't hesitate to reach out to Tianon
to see if it would be reasonable or helpful to add more paths to Docker's list,
especially if there's a relevant standard worth referencing (such as the FHS).

Also, it goes without saying, but for the purposes of the daemon please consider
these two binaries ("docker" and "dockerinit") as if they were a single unit.
Mixing and matching can cause undesired consequences, and will fail to run
properly.

## System Dependencies

### Runtime Dependencies

To function properly, the Docker daemon needs the following software to be
installed and available at runtime:

* iptables version 1.4 or later
* procps (or similar provider of a "ps" executable)
* e2fsprogs version 1.4.12 or later (in use: mkfs.ext4, mkfs.xfs, tune2fs)
* XZ Utils version 4.9 or later
* a [properly
  mounted](https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount)
  cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount point
  [is](https://github.com/docker/docker/issues/2683)
  [not](https://github.com/docker/docker/issues/3485)
  [sufficient](https://github.com/docker/docker/issues/4568))

Additionally, the Docker client needs the following software to be installed and
available at runtime:

* Git version 1.7 or later

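A post-install sanity check in your package might verify that these tools are actually on the `PATH` (a minimal sketch that checks presence only, not the versions listed above):

```bash
# Report any missing runtime dependencies by name (presence only).
check_deps() {
    missing=0
    for bin in "$@"; do
        command -v "$bin" >/dev/null 2>&1 || { echo "missing: $bin" >&2; missing=1; }
    done
    return "$missing"
}
# e.g.: check_deps iptables ps mkfs.ext4 tune2fs xz git
```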
### Kernel Requirements

The Docker daemon has very specific kernel requirements. Most pre-packaged
kernels already include the necessary options enabled. If you are building your
own kernel, you will either need to discover the options necessary via trial and
error, or check out the [Gentoo
ebuild](https://github.com/tianon/docker-overlay/blob/master/app-emulation/docker/docker-9999.ebuild),
in which a list is maintained (and if there are any issues or discrepancies in
that list, please contact Tianon so they can be rectified).

Note that in client mode, there are no specific kernel requirements, and that
the client will even run on alternative platforms such as Mac OS X / Darwin.

### Optional Dependencies

Some of Docker's features are activated by using optional command-line flags or
by having support for them in the kernel or userspace. A few examples include:

* LXC execution driver (requires version 1.0 or later of the LXC utility scripts)
* AUFS graph driver (requires AUFS patches/support enabled in the kernel, and at
  least the "auplink" utility from aufs-tools)
* BTRFS graph driver (requires BTRFS support enabled in the kernel)

## Daemon Init Script

Docker expects to run as a daemon at machine startup. Your package will need to
include a script for your distro's process supervisor of choice. Be sure to
check out the "contrib/init" folder in case a suitable init script already
exists (and if one does not, contact Tianon about whether it might be
appropriate for your distro's init script to live there too!).

In general, Docker should be run as root, similar to the following:

```bash
docker -d
```

Generally, a `DOCKER_OPTS` variable of some kind is available for adding more
flags (such as changing the graph driver to use BTRFS, switching the location of
"/var/lib/docker", etc.).

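For instance, an init script might splice `DOCKER_OPTS` into the daemon invocation roughly like this (the variable names follow common convention but are illustrative, as are the example flags):

```bash
# Assemble the daemon command line from a configurable options variable.
DOCKER=/usr/bin/docker
DOCKER_OPTS="-s btrfs -g /mnt/docker"   # e.g. btrfs driver, relocated graph dir
echo "$DOCKER -d $DOCKER_OPTS"
```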
## Communicate

As a final note, please do feel free to reach out to Tianon at any time for
pretty much anything. He really does love hearing from our packagers and wants
to make sure we're not being a "hostile upstream". As should be a given, we
appreciate the work our packagers do to make sure we have broad distribution!
deleted file mode 100644
@@ -1,19 +0,0 @@
# Docker principles

In the design and development of Docker we try to follow these principles:

(Work in progress)

* Don't try to replace every tool. Instead, be an ingredient to improve them.
* Less code is better.
* Fewer components are better. Do you really need to add one more class?
* 50 lines of straightforward, readable code is better than 10 lines of magic that nobody can understand.
* Don't do later what you can do now. "//FIXME: refactor" is not acceptable in new code.
* When hesitating between two options, choose the one that is easier to reverse.
* No is temporary, Yes is forever. If you're not sure about a new feature, say no. You can change your mind later.
* Containers must be portable to the greatest possible number of machines. Be suspicious of any change which makes machines less interchangeable.
* The fewer moving parts in a container, the better.
* Don't merge it unless you document it.
* Don't document it unless you can keep it up-to-date.
* Don't merge it unless you test it!
* Everyone's problem is slightly different. Focus on the part that is the same for everyone, and solve that.
deleted file mode 100644
@@ -1,24 +0,0 @@
# Hacking on Docker

The hack/ directory holds information and tools for everyone involved in the process of creating and
distributing Docker, specifically:

## Guides

If you're a *contributor* or aspiring contributor, you should read CONTRIBUTORS.md.

If you're a *maintainer* or aspiring maintainer, you should read MAINTAINERS.md.

If you're a *packager* or aspiring packager, you should read PACKAGERS.md.

If you're a maintainer in charge of a *release*, you should read RELEASE-CHECKLIST.md.

## Roadmap

A high-level roadmap is available at ROADMAP.md.

## Build tools

make.sh is the primary build tool for Docker. It is used for compiling the official binary,
running the test suite, and pushing releases.
deleted file mode 100644
@@ -1,303 +0,0 @@
# Release Checklist
## A maintainer's guide to releasing Docker

So you're in charge of a Docker release? Cool. Here's what to do.

If your experience deviates from this document, please document the changes
to keep it up-to-date.

It is important to note that this document assumes that the git remote in your
repository that corresponds to "https://github.com/docker/docker" is named
"origin". If yours is not (for example, if you've chosen to name it "upstream"
or something similar instead), be sure to adjust the listed snippets for your
local environment accordingly. If you are not sure what your upstream remote is
named, use a command like `git remote -v` to find out.

If you don't have an upstream remote, you can add one easily using something
like:

```bash
export GITHUBUSER="YOUR_GITHUB_USER"
git remote add origin https://github.com/docker/docker.git
git remote add $GITHUBUSER git@github.com:$GITHUBUSER/docker.git
```

### 1. Pull from master and create a release branch

Note: Even for major releases, all of X, Y and Z in vX.Y.Z must be specified (e.g. v1.0.0).

```bash
export VERSION=vX.Y.Z
git fetch origin
git branch -D release || true
git checkout --track origin/release
git checkout -b bump_$VERSION
```

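Since all of X, Y, and Z must be given, a bump script can guard against a partial version string with a quick check (an illustrative helper, not part of the official tooling):

```bash
# Verify that VERSION is fully specified as vX.Y.Z.
check_version() {
    echo "$1" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'
}
# e.g.: check_version "$VERSION" || echo "VERSION must look like vX.Y.Z" >&2
```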
| 37 |
-If it's a regular release, we usually merge master. |
|
| 38 |
-```bash |
|
| 39 |
-git merge origin/master |
|
| 40 |
-``` |
|
| 41 |
- |
|
| 42 |
-Otherwise, if it is a hotfix release, we cherry-pick only the commits we want. |
|
| 43 |
-```bash |
|
| 44 |
-# get the commits ids we want to cherry-pick |
|
| 45 |
-git log |
|
| 46 |
-# cherry-pick the commits starting from the oldest one, without including merge commits |
|
| 47 |
-git cherry-pick <commit-id> |
|
| 48 |
-git cherry-pick <commit-id> |
|
| 49 |
-... |
|
| 50 |
-``` |
|
| 51 |
- |
|
| 52 |
-### 2. Update CHANGELOG.md |
|
| 53 |
- |
|
| 54 |
-You can run this command for reference with git 2.0: |
|
| 55 |
- |
|
| 56 |
-```bash |
|
| 57 |
-git fetch --tags |
|
| 58 |
-LAST_VERSION=$(git tag -l --sort=-version:refname "v*" | grep -E 'v[0-9\.]+$' | head -1) |
|
| 59 |
-git log --stat $LAST_VERSION..bump_$VERSION |
|
| 60 |
-``` |
|
| 61 |
- |
|
| 62 |
-If you don't have git 2.0 but have a sort command that supports `-V`: |
|
| 63 |
-```bash |
|
| 64 |
-git fetch --tags |
|
| 65 |
-LAST_VERSION=$(git tag -l | grep -E 'v[0-9\.]+$' | sort -rV | head -1) |
|
| 66 |
-git log --stat $LAST_VERSION..bump_$VERSION |
|
| 67 |
-``` |
|
| 68 |
- |
|
| 69 |
-If releasing a major version (X or Y increased in vX.Y.Z), simply listing notable user-facing features is sufficient. |
|
| 70 |
-```markdown |
|
| 71 |
-#### Notable features since <last major version> |
|
| 72 |
-* New docker command to do something useful |
|
| 73 |
-* Remote API change (deprecating old version) |
|
| 74 |
-* Performance improvements in some use cases |
|
| 75 |
-* ... |
|
| 76 |
-``` |
|
| 77 |
- |
|
| 78 |
-For minor releases (only Z increases in vX.Y.Z), provide a list of user-facing changes. |
|
| 79 |
-Each change should be listed under a category heading formatted as `#### CATEGORY`. |
|
| 80 |
- |
|
| 81 |
-`CATEGORY` should describe which part of the project is affected. |
|
| 82 |
- Valid categories are: |
|
| 83 |
- * Builder |
|
| 84 |
- * Documentation |
|
| 85 |
- * Hack |
|
| 86 |
- * Packaging |
|
| 87 |
- * Remote API |
|
| 88 |
- * Runtime |
|
| 89 |
- * Other (please use this category sparingly) |
|
| 90 |
- |
|
| 91 |
-Each change should be formatted as `BULLET DESCRIPTION`, where: |
|
| 92 |
- |
|
| 93 |
-* BULLET: either `-`, `+` or `*`, to indicate a bugfix, new feature or |
|
| 94 |
- upgrade, respectively. |
|
| 95 |
- |
|
| 96 |
-* DESCRIPTION: a concise description of the change that is relevant to the |
|
| 97 |
- end-user, using the present tense. Changes should be described in terms |
|
| 98 |
- of how they affect the user, for example "Add new feature X which allows Y", |
|
| 99 |
- "Fix bug which caused X", "Increase performance of Y". |
|
| 100 |
- |
|
| 101 |
-EXAMPLES: |
|
| 102 |
- |
|
| 103 |
-```markdown |
|
| 104 |
-## 0.3.6 (1995-12-25) |
|
| 105 |
- |
|
| 106 |
-#### Builder |
|
| 107 |
- |
|
| 108 |
-+ 'docker build -t FOO .' applies the tag FOO to the newly built image |
|
| 109 |
- |
|
| 110 |
-#### Remote API |
|
| 111 |
- |
|
| 112 |
-- Fix a bug in the optional unix socket transport |
|
| 113 |
- |
|
| 114 |
-#### Runtime |
|
| 115 |
- |
|
| 116 |
-* Improve detection of kernel version |
|
| 117 |
-``` |
|
| 118 |
- |
|
| 119 |
-If you need a list of contributors between the last major release and the |
|
| 120 |
-current bump branch, use something like: |
|
| 121 |
-```bash |
|
| 122 |
-git log --format='%aN <%aE>' v0.7.0...bump_v0.8.0 | sort -uf |
|
| 123 |
-``` |
|
| 124 |
-Obviously, you'll need to adjust version numbers as necessary. If you just need |
|
| 125 |
-a count, add a simple `| wc -l`. |
|
| 126 |
- |
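For instance, appending `wc -l` to the same command yields the contributor count directly (a sketch; substitute the tag and branch names for the release you are actually preparing):

```bash
# Count the unique contributors between two refs (adjust refs as needed).
git log --format='%aN <%aE>' v0.7.0...bump_v0.8.0 | sort -uf | wc -l
```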
|
| 127 |
-### 3. Change the contents of the VERSION file |
|
| 128 |
- |
|
| 129 |
-```bash |
|
| 130 |
-echo ${VERSION#v} > VERSION
|
|
| 131 |
-``` |
|
| 132 |
- |
|
| 133 |
-### 4. Test the docs |
|
| 134 |
- |
|
| 135 |
-Make sure that your tree includes documentation for any modified or |
|
| 136 |
-new features, syntax or semantic changes. |
|
| 137 |
- |
|
| 138 |
-To test locally: |
|
| 139 |
- |
|
| 140 |
-```bash |
|
| 141 |
-make docs |
|
| 142 |
-``` |
|
| 143 |
- |
|
| 144 |
-To make a shared test at http://beta-docs.docker.io: |
|
| 145 |
- |
|
| 146 |
-(You will need the `awsconfig` file added to the `docs/` dir) |
|
| 147 |
- |
|
| 148 |
-```bash |
|
| 149 |
-make AWS_S3_BUCKET=beta-docs.docker.io BUILD_ROOT=yes docs-release |
|
| 150 |
-``` |
|
| 151 |
- |
|
| 152 |
-### 5. Commit and create a pull request to the "release" branch |
|
| 153 |
- |
|
| 154 |
-```bash |
|
| 155 |
-git add VERSION CHANGELOG.md |
|
| 156 |
-git commit -m "Bump version to $VERSION" |
|
| 157 |
-git push $GITHUBUSER bump_$VERSION |
|
| 158 |
-echo "https://github.com/$GITHUBUSER/docker/compare/docker:release...$GITHUBUSER:bump_$VERSION?expand=1" |
|
| 159 |
-``` |
|
| 160 |
- |
|
| 161 |
-That last command will give you the proper link to visit to ensure that you |
|
| 162 |
-open the PR against the "release" branch instead of accidentally against |
|
| 163 |
-"master" (like so many brave souls before you already have). |
|
| 164 |
- |
|
| 165 |
-### 6. Get 2 other maintainers to validate the pull request |
|
| 166 |
- |
|
| 167 |
-### 7. Publish binaries |
|
| 168 |
- |
|
| 169 |
-To run this you will need access to the release credentials. Get them from the Core maintainers. |
|
| 170 |
- |
|
| 171 |
-Replace "..." with the respective credentials: |
|
| 172 |
- |
|
| 173 |
-```bash |
|
| 174 |
-docker build -t docker . |
|
| 175 |
-docker run \ |
|
| 176 |
- -e AWS_S3_BUCKET=test.docker.com \ |
|
| 177 |
- -e AWS_ACCESS_KEY="..." \ |
|
| 178 |
- -e AWS_SECRET_KEY="..." \ |
|
| 179 |
- -e GPG_PASSPHRASE="..." \ |
|
| 180 |
- -i -t --privileged \ |
|
| 181 |
- docker \ |
|
| 182 |
- hack/release.sh |
|
| 183 |
-``` |
|
| 184 |
- |
|
| 185 |
-It will run the test suite, build the binaries and packages, |
|
| 186 |
-and upload to the specified bucket (you should use test.docker.com for |
|
| 187 |
-general testing, and once everything is fine, switch to get.docker.com as |
|
| 188 |
-noted below). |
|
| 189 |
- |
|
| 190 |
-After the binaries and packages are uploaded to test.docker.com, make sure |
|
| 191 |
-they get tested in both Ubuntu and Debian for any obvious installation |
|
| 192 |
-issues or runtime issues. |
|
| 193 |
- |
|
| 194 |
-Announcing on IRC in both `#docker` and `#docker-dev` is a great way to get |
|
| 195 |
-help testing! An easy way to get some useful links for sharing: |
|
| 196 |
- |
|
| 197 |
-```bash |
|
| 198 |
-echo "Ubuntu/Debian: https://test.docker.com/ubuntu or curl -sSL https://test.docker.com/ | sh" |
|
| 199 |
-echo "Linux 64bit binary: https://test.docker.com/builds/Linux/x86_64/docker-${VERSION#v}"
|
|
| 200 |
-echo "Darwin/OSX 64bit client binary: https://test.docker.com/builds/Darwin/x86_64/docker-${VERSION#v}"
|
|
| 201 |
-echo "Darwin/OSX 32bit client binary: https://test.docker.com/builds/Darwin/i386/docker-${VERSION#v}"
|
|
| 202 |
-echo "Linux 64bit tgz: https://test.docker.com/builds/Linux/x86_64/docker-${VERSION#v}.tgz"
|
|
| 203 |
-``` |
|
| 204 |
- |
|
| 205 |
-Once they're tested and reasonably believed to be working, run against |
|
| 206 |
-get.docker.com: |
|
| 207 |
- |
|
| 208 |
-```bash |
|
| 209 |
-docker run \ |
|
| 210 |
- -e AWS_S3_BUCKET=get.docker.com \ |
|
| 211 |
- -e AWS_ACCESS_KEY="..." \ |
|
| 212 |
- -e AWS_SECRET_KEY="..." \ |
|
| 213 |
- -e GPG_PASSPHRASE="..." \ |
|
| 214 |
- -i -t --privileged \ |
|
| 215 |
- docker \ |
|
| 216 |
- hack/release.sh |
|
| 217 |
-``` |
|
| 218 |
- |
|
| 219 |
-### 8. Breakathon |
|
| 220 |
- |
|
| 221 |
-Spend several days along with the community explicitly investing time and |
|
| 222 |
-resources to try and break Docker in every possible way, documenting any |
|
| 223 |
-findings pertinent to the release. This time should be spent testing and |
|
| 224 |
-finding ways in which the release might have caused various features or upgrade |
|
| 225 |
-environments to have issues, not coding. During this time, the release is in |
|
| 226 |
-code freeze, and any additional code changes will be pushed out to the next |
|
| 227 |
-release. |
|
| 228 |
- |
|
| 229 |
-It should include various levels of breaking Docker, beyond just using Docker |
|
| 230 |
-by the book. |
|
| 231 |
- |
|
| 232 |
-Any issues found may still remain issues for this release, but they should be |
|
| 233 |
-documented and given appropriate warnings. |
|
| 234 |
- |
|
| 235 |
-### 9. Apply tag |
|
| 236 |
- |
|
| 237 |
-It's very important that we don't make the tag until after the official |
|
| 238 |
-release is uploaded to get.docker.com! |
|
| 239 |
- |
|
| 240 |
-```bash |
|
| 241 |
-git tag -a $VERSION -m $VERSION bump_$VERSION |
|
| 242 |
-git push origin $VERSION |
|
| 243 |
-``` |
|
| 244 |
- |
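Before pushing, a quick sanity check can confirm the tag was created as an annotated tag and resolves to the tip of the bump branch (a sketch, assuming `$VERSION` and `bump_$VERSION` from the earlier steps):

```bash
# The tag object should be annotated ("tag", not a bare commit)...
test "$(git cat-file -t "$VERSION")" = "tag"
# ...and should point at the same commit as the bump branch tip.
test "$(git rev-parse "$VERSION^{commit}")" = "$(git rev-parse "bump_$VERSION")"
```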
|
| 245 |
-### 10. Go to github to merge the `bump_$VERSION` branch into release |
|
| 246 |
- |
|
| 247 |
-Don't forget to push that pretty blue button to delete the leftover |
|
| 248 |
-branch afterwards! |
|
| 249 |
- |
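If you prefer the command line to the web UI, a rough local equivalent is sketched below (remote names assume the `origin`/`$GITHUBUSER` setup from the introduction; the GitHub merge button does the branch deletion for you):

```bash
# CLI sketch of merging the bump branch into release
# (normally done via the GitHub merge button).
git checkout release
git merge --no-ff "bump_$VERSION" -m "Merge bump_$VERSION into release"
git push origin release
# Clean up the leftover branch locally and on your fork:
git branch -d "bump_$VERSION"
git push "$GITHUBUSER" --delete "bump_$VERSION"
```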
|
| 250 |
-### 11. Update the docs branch |
|
| 251 |
- |
|
| 252 |
-If this is a MAJOR.MINOR.0 release, you need to make a branch for the previous release's |
|
| 253 |
-documentation: |
|
| 254 |
- |
|
| 255 |
-```bash |
|
| 256 |
-git checkout -b docs-$PREVIOUS_MAJOR_MINOR docs |
|
| 257 |
-git fetch |
|
| 258 |
-git reset --hard origin/docs |
|
| 259 |
-git push -f origin docs-$PREVIOUS_MAJOR_MINOR |
|
| 260 |
-``` |
|
| 261 |
- |
|
| 262 |
-You will need the `awsconfig` file added to the `docs/` directory, containing the |
|
| 263 |
-S3 credentials for the bucket you are deploying to. |
|
| 264 |
- |
|
| 265 |
-```bash |
|
| 266 |
-git checkout -b docs release || git checkout docs |
|
| 267 |
-git fetch |
|
| 268 |
-git reset --hard origin/release |
|
| 269 |
-git push -f origin docs |
|
| 270 |
-make AWS_S3_BUCKET=docs.docker.com BUILD_ROOT=yes docs-release |
|
| 271 |
-``` |
|
| 272 |
- |
|
| 273 |
-The docs will appear on http://docs.docker.com/ (though there may be cached |
|
| 274 |
-versions, so it's worth checking http://docs.docker.com.s3-website-us-east-1.amazonaws.com/). |
|
| 275 |
-For more information about documentation releases, see `docs/README.md`. |
|
| 276 |
- |
|
| 277 |
-Ask Sven or JohnC to invalidate the CloudFront cache using the CND Planet Chrome applet. |
|
| 278 |
- |
|
| 279 |
-### 12. Create a new pull request to merge release back into master |
|
| 280 |
- |
|
| 281 |
-```bash |
|
| 282 |
-git checkout master |
|
| 283 |
-git fetch |
|
| 284 |
-git reset --hard origin/master |
|
| 285 |
-git merge origin/release |
|
| 286 |
-git checkout -b merge_release_$VERSION |
|
| 287 |
-echo ${VERSION#v}-dev > VERSION
|
|
| 288 |
-git add VERSION |
|
| 289 |
-git commit -m "Change version to $(cat VERSION)" |
|
| 290 |
-git push $GITHUBUSER merge_release_$VERSION |
|
| 291 |
-echo "https://github.com/$GITHUBUSER/docker/compare/docker:master...$GITHUBUSER:merge_release_$VERSION?expand=1" |
|
| 292 |
-``` |
|
| 293 |
- |
|
| 294 |
-Again, get two maintainers to validate, then merge, then push that pretty |
|
| 295 |
-blue button to delete your branch. |
|
| 296 |
- |
|
| 297 |
-### 13. Rejoice and Evangelize! |
|
| 298 |
- |
|
| 299 |
-Congratulations! You're done. |
|
| 300 |
- |
|
| 301 |
-Go forth and announce the glad tidings of the new release in `#docker`, |
|
| 302 |
-`#docker-dev`, on the [mailing list](https://groups.google.com/forum/#!forum/docker-dev), |
|
| 303 |
-and on Twitter! |
| 304 | 1 |
deleted file mode 100644 |
| ... | ... |
@@ -1,43 +0,0 @@ |
| 1 |
-# Docker: Statement of Direction |
|
| 2 |
- |
|
| 3 |
-This document is a high-level overview of where we want to take Docker. |
|
| 4 |
-It is a curated selection of planned improvements which are either important, difficult, or both. |
|
| 5 |
- |
|
| 6 |
-For a more complete view of planned and requested improvements, see [the Github issues](https://github.com/docker/docker/issues). |
|
| 7 |
- |
|
| 8 |
-To suggest changes to the roadmap, including additions, please write the change as if it were already in effect, and make a pull request. |
|
| 9 |
- |
|
| 10 |
- |
|
| 11 |
-## Orchestration |
|
| 12 |
- |
|
| 13 |
-Orchestration touches on several aspects of multi-container applications. These include provisioning hosts with the Docker daemon, organizing and maintaining multiple Docker hosts as a cluster, composing an application using multiple containers, and handling the networking between the containers across the hosts. |
|
| 14 |
- |
|
| 15 |
-Today, users accomplish this using a combination of glue scripts and various tools, like Shipper, Deis, Pipeworks, etc. |
|
| 16 |
- |
|
| 17 |
-We want the Docker API to support all aspects of orchestration natively, so that these tools can cleanly and seamlessly integrate into the Docker user experience, and remain interoperable with each other. |
|
| 18 |
- |
|
| 19 |
-## Networking |
|
| 20 |
- |
|
| 21 |
-The current Docker networking model works for communication between containers all residing on the same host. Since Docker applications in production are made up of many containers deployed across multiple hosts (and sometimes multiple data centers), Docker’s networking model will evolve to accommodate this. An aspect of this evolution includes providing a Networking API to enable alternative implementations. |
|
| 22 |
- |
|
| 23 |
-## Storage |
|
| 24 |
- |
|
| 25 |
-Currently, stateful Docker containers are pinned to specific hosts during their lifetime. To support additional resiliency, capacity management, and load balancing we want to enable live stateful containers to dynamically migrate between hosts. While the Docker Project will provide a “batteries included” implementation for a great out-of-box experience, we will also provide an API for alternative implementations. |
|
| 26 |
- |
|
| 27 |
-## Microsoft Windows |
|
| 28 |
- |
|
| 29 |
-The next Microsoft Windows Server will ship with primitives to support container-based process isolation and resource management. The Docker Project will guide contributors and maintainers developing native Microsoft versions of the Docker Remote API client and Docker daemon to take advantage of these primitives. |
|
| 30 |
- |
|
| 31 |
-## Provenance |
|
| 32 |
- |
|
| 33 |
-When assembling Docker applications we want users to be confident that images they didn’t create themselves are safe to use and build upon. Provenance gives users the capability to digitally verify the inputs and processes constituting an image’s origins and lifecycle events. |
|
| 34 |
- |
|
| 35 |
-## Plugin API |
|
| 36 |
- |
|
| 37 |
-We want Docker to run everywhere, and to integrate with every devops tool. Those are ambitious goals, and the only way to reach them is with the Docker community. For the community to participate fully, we need an API which allows Docker to be deeply and easily customized. |
|
| 38 |
- |
|
| 39 |
-We are working on a plugin API which will make Docker very customization-friendly. We believe it will facilitate the integrations listed above – and many more we didn’t even think about. |
|
| 40 |
- |
|
| 41 |
-## Multi-Architecture Support |
|
| 42 |
- |
|
| 43 |
-Our goal is to make Docker run everywhere. However, currently Docker only runs on x86_64 systems. We plan on expanding architecture support, so that Docker containers can be created and used on more architectures, including ARM, Joyent SmartOS, and Microsoft. |
| 4 | 1 |
deleted file mode 100755 |
| ... | ... |
@@ -1,88 +0,0 @@ |
| 1 |
-#!/bin/bash |
|
| 2 |
-set -e |
|
| 3 |
- |
|
| 4 |
-# DinD: a wrapper script which allows docker to be run inside a docker container. |
|
| 5 |
-# Original version by Jerome Petazzoni <jerome@docker.com> |
|
| 6 |
-# See the blog post: http://blog.docker.com/2013/09/docker-can-now-run-within-docker/ |
|
| 7 |
-# |
|
| 8 |
-# This script should be executed inside a docker container in privileged mode |
|
| 9 |
-# ('docker run --privileged', introduced in docker 0.6).
|
|
| 10 |
- |
|
| 11 |
-# Usage: dind CMD [ARG...] |
|
| 12 |
- |
|
| 13 |
-# apparmor sucks and Docker needs to know that it's in a container (c) @tianon |
|
| 14 |
-export container=docker |
|
| 15 |
- |
|
| 16 |
-# First, make sure that cgroups are mounted correctly. |
|
| 17 |
-CGROUP=/cgroup |
|
| 18 |
- |
|
| 19 |
-mkdir -p "$CGROUP" |
|
| 20 |
- |
|
| 21 |
-if ! mountpoint -q "$CGROUP"; then |
|
| 22 |
- mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
|
|
| 23 |
- echo >&2 'Could not make a tmpfs mount. Did you use --privileged?' |
|
| 24 |
- exit 1 |
|
| 25 |
- } |
|
| 26 |
-fi |
|
| 27 |
- |
|
| 28 |
-if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then |
|
| 29 |
- mount -t securityfs none /sys/kernel/security || {
|
|
| 30 |
- echo >&2 'Could not mount /sys/kernel/security.' |
|
| 31 |
- echo >&2 'AppArmor detection and -privileged mode might break.' |
|
| 32 |
- } |
|
| 33 |
-fi |
|
| 34 |
- |
|
| 35 |
-# Mount the cgroup hierarchies exactly as they are in the parent system. |
|
| 36 |
-for SUBSYS in $(cut -d: -f2 /proc/1/cgroup); do |
|
| 37 |
- mkdir -p "$CGROUP/$SUBSYS" |
|
| 38 |
- if ! mountpoint -q $CGROUP/$SUBSYS; then |
|
| 39 |
- mount -n -t cgroup -o "$SUBSYS" cgroup "$CGROUP/$SUBSYS" |
|
| 40 |
- fi |
|
| 41 |
- |
|
| 42 |
- # The two following sections address a bug which manifests itself |
|
| 43 |
- # by a cryptic "lxc-start: no ns_cgroup option specified" when |
|
| 44 |
- # trying to start containers within a container. |
|
| 45 |
- # The bug seems to appear when the cgroup hierarchies are not |
|
| 46 |
- # mounted on the exact same directories in the host, and in the |
|
| 47 |
- # container. |
|
| 48 |
- |
|
| 49 |
- # Named, control-less cgroups are mounted with "-o name=foo" |
|
| 50 |
- # (and appear as such under /proc/<pid>/cgroup) but are usually |
|
| 51 |
- # mounted on a directory named "foo" (without the "name=" prefix). |
|
| 52 |
- # Systemd and OpenRC (and possibly others) both create such a |
|
| 53 |
- # cgroup. To avoid the aforementioned bug, we symlink "foo" to |
|
| 54 |
- # "name=foo". This shouldn't have any adverse effect. |
|
| 55 |
- name="${SUBSYS#name=}"
|
|
| 56 |
- if [ "$name" != "$SUBSYS" ]; then |
|
| 57 |
- ln -s "$SUBSYS" "$CGROUP/$name" |
|
| 58 |
- fi |
|
| 59 |
- |
|
| 60 |
- # Likewise, on at least one system, it has been reported that |
|
| 61 |
- # systemd would mount the CPU and CPU accounting controllers |
|
| 62 |
- # (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu" |
|
| 63 |
- # but on a directory called "cpu,cpuacct" (note the inversion |
|
| 64 |
- # in the order of the groups). This tries to work around it. |
|
| 65 |
- if [ "$SUBSYS" = 'cpuacct,cpu' ]; then |
|
| 66 |
- ln -s "$SUBSYS" "$CGROUP/cpu,cpuacct" |
|
| 67 |
- fi |
|
| 68 |
-done |
|
| 69 |
- |
|
| 70 |
-# Note: as I write those lines, the LXC userland tools cannot setup |
|
| 71 |
-# a "sub-container" properly if the "devices" cgroup is not in its |
|
| 72 |
-# own hierarchy. Let's detect this and issue a warning. |
|
| 73 |
-if ! grep -q :devices: /proc/1/cgroup; then |
|
| 74 |
- echo >&2 'WARNING: the "devices" cgroup should be in its own hierarchy.' |
|
| 75 |
-fi |
|
| 76 |
-if ! grep -qw devices /proc/1/cgroup; then |
|
| 77 |
- echo >&2 'WARNING: it looks like the "devices" cgroup is not mounted.' |
|
| 78 |
-fi |
|
| 79 |
- |
|
| 80 |
-# Mount /tmp |
|
| 81 |
-mount -t tmpfs none /tmp |
|
| 82 |
- |
|
| 83 |
-if [ $# -gt 0 ]; then |
|
| 84 |
- exec "$@" |
|
| 85 |
-fi |
|
| 86 |
- |
|
| 87 |
-echo >&2 'ERROR: No command specified.' |
|
| 88 |
-echo >&2 'You probably want to run hack/make.sh, or maybe a shell?' |
| 89 | 1 |
deleted file mode 100755 |
| ... | ... |
@@ -1,15 +0,0 @@ |
| 1 |
-#!/bin/bash |
|
| 2 |
-set -e |
|
| 3 |
- |
|
| 4 |
-cd "$(dirname "$(readlink -f "$BASH_SOURCE")")/.." |
|
| 5 |
- |
|
| 6 |
-# see also ".mailmap" for how email addresses and names are deduplicated |
|
| 7 |
- |
|
| 8 |
-{
|
|
| 9 |
- cat <<-'EOH' |
|
| 10 |
- # This file lists all individuals having contributed content to the repository. |
|
| 11 |
- # For how it is generated, see `hack/generate-authors.sh`. |
|
| 12 |
- EOH |
|
| 13 |
- echo |
|
| 14 |
- git log --format='%aN <%aE>' | sort -uf |
|
| 15 |
-} > AUTHORS |
| 16 | 1 |
deleted file mode 100755 |
| ... | ... |
@@ -1,62 +0,0 @@ |
| 1 |
-#!/usr/bin/env bash |
|
| 2 |
-set -e |
|
| 3 |
- |
|
| 4 |
-if [ $# -ne 1 ]; then |
|
| 5 |
- echo >&2 "Usage: $0 PATH" |
|
| 6 |
- echo >&2 "Show the primary and secondary maintainers for a given path" |
|
| 7 |
- exit 1 |
|
| 8 |
-fi |
|
| 9 |
- |
|
| 10 |
-set -e |
|
| 11 |
- |
|
| 12 |
-DEST=$1 |
|
| 13 |
-DESTFILE="" |
|
| 14 |
-if [ ! -d $DEST ]; then |
|
| 15 |
- DESTFILE=$(basename $DEST) |
|
| 16 |
- DEST=$(dirname $DEST) |
|
| 17 |
-fi |
|
| 18 |
- |
|
| 19 |
-MAINTAINERS=() |
|
| 20 |
-cd $DEST |
|
| 21 |
-while true; do |
|
| 22 |
- if [ -e ./MAINTAINERS ]; then |
|
| 23 |
- {
|
|
| 24 |
- while read line; do |
|
| 25 |
- re='^([^:]*): *(.*)$' |
|
| 26 |
- file=$(echo $line | sed -E -n "s/$re/\1/p") |
|
| 27 |
- if [ ! -z "$file" ]; then |
|
| 28 |
- if [ "$file" = "$DESTFILE" ]; then |
|
| 29 |
- echo "Override: $line" |
|
| 30 |
- maintainer=$(echo $line | sed -E -n "s/$re/\2/p") |
|
| 31 |
- MAINTAINERS=("$maintainer" "${MAINTAINERS[@]}")
|
|
| 32 |
- fi |
|
| 33 |
- else |
|
| 34 |
- MAINTAINERS+=("$line");
|
|
| 35 |
- fi |
|
| 36 |
- done; |
|
| 37 |
- } < MAINTAINERS |
|
| 38 |
- break |
|
| 39 |
- fi |
|
| 40 |
- if [ -d .git ]; then |
|
| 41 |
- break |
|
| 42 |
- fi |
|
| 43 |
- if [ "$(pwd)" = "/" ]; then |
|
| 44 |
- break |
|
| 45 |
- fi |
|
| 46 |
- cd .. |
|
| 47 |
-done |
|
| 48 |
- |
|
| 49 |
-PRIMARY="${MAINTAINERS[0]}"
|
|
| 50 |
-PRIMARY_FIRSTNAME=$(echo $PRIMARY | cut -d' ' -f1) |
|
| 51 |
-LGTM_COUNT=${#MAINTAINERS[@]}
|
|
| 52 |
-LGTM_COUNT=$((LGTM_COUNT%2 +1)) |
|
| 53 |
- |
|
| 54 |
-firstname() {
|
|
| 55 |
- echo $1 | cut -d' ' -f1 |
|
| 56 |
-} |
|
| 57 |
- |
|
| 58 |
-echo "A pull request in $1 will need $LGTM_COUNT LGTM's to be merged." |
|
| 59 |
-echo "--- $PRIMARY is the PRIMARY MAINTAINER of $1." |
|
| 60 |
-for SECONDARY in "${MAINTAINERS[@]:1}"; do
|
|
| 61 |
- echo "--- $SECONDARY" |
|
| 62 |
-done |
| 63 | 1 |
deleted file mode 100755 |
| ... | ... |
@@ -1,225 +0,0 @@ |
| 1 |
-#!/bin/sh |
|
| 2 |
-set -e |
|
| 3 |
-# |
|
| 4 |
-# This script is meant for quick & easy install via: |
|
| 5 |
-# 'curl -sSL https://get.docker.com/ | sh' |
|
| 6 |
-# or: |
|
| 7 |
-# 'wget -qO- https://get.docker.com/ | sh' |
|
| 8 |
-# |
|
| 9 |
-# |
|
| 10 |
-# Docker Maintainers: |
|
| 11 |
-# To update this script on https://get.docker.com, |
|
| 12 |
-# use hack/release.sh during a normal release, |
|
| 13 |
-# or the following one-liner for script hotfixes: |
|
| 14 |
-# s3cmd put --acl-public -P hack/install.sh s3://get.docker.com/index |
|
| 15 |
-# |
|
| 16 |
- |
|
| 17 |
-url='https://get.docker.com/' |
|
| 18 |
- |
|
| 19 |
-command_exists() {
|
|
| 20 |
- command -v "$@" > /dev/null 2>&1 |
|
| 21 |
-} |
|
| 22 |
- |
|
| 23 |
-case "$(uname -m)" in |
|
| 24 |
- *64) |
|
| 25 |
- ;; |
|
| 26 |
- *) |
|
| 27 |
- echo >&2 'Error: you are not using a 64bit platform.' |
|
| 28 |
- echo >&2 'Docker currently only supports 64bit platforms.' |
|
| 29 |
- exit 1 |
|
| 30 |
- ;; |
|
| 31 |
-esac |
|
| 32 |
- |
|
| 33 |
-if command_exists docker || command_exists lxc-docker; then |
|
| 34 |
- echo >&2 'Warning: "docker" or "lxc-docker" command appears to already exist.' |
|
| 35 |
- echo >&2 'Please ensure that you do not already have docker installed.' |
|
| 36 |
- echo >&2 'You may press Ctrl+C now to abort this process and rectify this situation.' |
|
| 37 |
- ( set -x; sleep 20 ) |
|
| 38 |
-fi |
|
| 39 |
- |
|
| 40 |
-user="$(id -un 2>/dev/null || true)" |
|
| 41 |
- |
|
| 42 |
-sh_c='sh -c' |
|
| 43 |
-if [ "$user" != 'root' ]; then |
|
| 44 |
- if command_exists sudo; then |
|
| 45 |
- sh_c='sudo -E sh -c' |
|
| 46 |
- elif command_exists su; then |
|
| 47 |
- sh_c='su -c' |
|
| 48 |
- else |
|
| 49 |
- echo >&2 'Error: this installer needs the ability to run commands as root.' |
|
| 50 |
- echo >&2 'We are unable to find either "sudo" or "su" available to make this happen.' |
|
| 51 |
- exit 1 |
|
| 52 |
- fi |
|
| 53 |
-fi |
|
| 54 |
- |
|
| 55 |
-curl='' |
|
| 56 |
-if command_exists curl; then |
|
| 57 |
- curl='curl -sSL' |
|
| 58 |
-elif command_exists wget; then |
|
| 59 |
- curl='wget -qO-' |
|
| 60 |
-elif command_exists busybox && busybox --list-modules | grep -q wget; then |
|
| 61 |
- curl='busybox wget -qO-' |
|
| 62 |
-fi |
|
| 63 |
- |
|
| 64 |
-# perform some very rudimentary platform detection |
|
| 65 |
-lsb_dist='' |
|
| 66 |
-if command_exists lsb_release; then |
|
| 67 |
- lsb_dist="$(lsb_release -si)" |
|
| 68 |
-fi |
|
| 69 |
-if [ -z "$lsb_dist" ] && [ -r /etc/lsb-release ]; then |
|
| 70 |
- lsb_dist="$(. /etc/lsb-release && echo "$DISTRIB_ID")" |
|
| 71 |
-fi |
|
| 72 |
-if [ -z "$lsb_dist" ] && [ -r /etc/debian_version ]; then |
|
| 73 |
- lsb_dist='debian' |
|
| 74 |
-fi |
|
| 75 |
-if [ -z "$lsb_dist" ] && [ -r /etc/fedora-release ]; then |
|
| 76 |
- lsb_dist='fedora' |
|
| 77 |
-fi |
|
| 78 |
-if [ -z "$lsb_dist" ] && [ -r /etc/os-release ]; then |
|
| 79 |
- lsb_dist="$(. /etc/os-release && echo "$ID")" |
|
| 80 |
-fi |
|
| 81 |
- |
|
| 82 |
-lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')" |
|
| 83 |
-case "$lsb_dist" in |
|
| 84 |
- amzn|fedora) |
|
| 85 |
- if [ "$lsb_dist" = 'amzn' ]; then |
|
| 86 |
- ( |
|
| 87 |
- set -x |
|
| 88 |
- $sh_c 'sleep 3; yum -y -q install docker' |
|
| 89 |
- ) |
|
| 90 |
- else |
|
| 91 |
- ( |
|
| 92 |
- set -x |
|
| 93 |
- $sh_c 'sleep 3; yum -y -q install docker-io' |
|
| 94 |
- ) |
|
| 95 |
- fi |
|
| 96 |
- if command_exists docker && [ -e /var/run/docker.sock ]; then |
|
| 97 |
- ( |
|
| 98 |
- set -x |
|
| 99 |
- $sh_c 'docker version' |
|
| 100 |
- ) || true |
|
| 101 |
- fi |
|
| 102 |
- your_user=your-user |
|
| 103 |
- [ "$user" != 'root' ] && your_user="$user" |
|
| 104 |
- echo |
|
| 105 |
- echo 'If you would like to use Docker as a non-root user, you should now consider' |
|
| 106 |
- echo 'adding your user to the "docker" group with something like:' |
|
| 107 |
- echo |
|
| 108 |
- echo ' sudo usermod -aG docker' $your_user |
|
| 109 |
- echo |
|
| 110 |
- echo 'Remember that you will have to log out and back in for this to take effect!' |
|
| 111 |
- echo |
|
| 112 |
- exit 0 |
|
| 113 |
- ;; |
|
| 114 |
- |
|
| 115 |
- ubuntu|debian|linuxmint) |
|
| 116 |
- export DEBIAN_FRONTEND=noninteractive |
|
| 117 |
- |
|
| 118 |
- did_apt_get_update= |
|
| 119 |
- apt_get_update() {
|
|
| 120 |
- if [ -z "$did_apt_get_update" ]; then |
|
| 121 |
- ( set -x; $sh_c 'sleep 3; apt-get update' ) |
|
| 122 |
- did_apt_get_update=1 |
|
| 123 |
- fi |
|
| 124 |
- } |
|
| 125 |
- |
|
| 126 |
- # aufs is preferred over devicemapper; try to ensure the driver is available. |
|
| 127 |
- if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then |
|
| 128 |
- kern_extras="linux-image-extra-$(uname -r)" |
|
| 129 |
- |
|
| 130 |
- apt_get_update |
|
| 131 |
- ( set -x; $sh_c 'sleep 3; apt-get install -y -q '"$kern_extras" ) || true |
|
| 132 |
- |
|
| 133 |
- if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then |
|
| 134 |
- echo >&2 'Warning: tried to install '"$kern_extras"' (for AUFS)' |
|
| 135 |
- echo >&2 ' but we still have no AUFS. Docker may not work. Proceeding anyways!' |
|
| 136 |
- ( set -x; sleep 10 ) |
|
| 137 |
- fi |
|
| 138 |
- fi |
|
| 139 |
- |
|
| 140 |
- # install apparmor utils if they're missing and apparmor is enabled in the kernel |
|
| 141 |
- # otherwise Docker will fail to start |
|
| 142 |
- if [ "$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null)" = 'Y' ]; then |
|
| 143 |
- if command -v apparmor_parser &> /dev/null; then |
|
| 144 |
- echo 'apparmor is enabled in the kernel and apparmor utils were already installed' |
|
| 145 |
- else |
|
| 146 |
- echo 'apparmor is enabled in the kernel, but apparmor_parser missing' |
|
| 147 |
- apt_get_update |
|
| 148 |
- ( set -x; $sh_c 'sleep 3; apt-get install -y -q apparmor' ) |
|
| 149 |
- fi |
|
| 150 |
- fi |
|
| 151 |
- |
|
| 152 |
- if [ ! -e /usr/lib/apt/methods/https ]; then |
|
| 153 |
- apt_get_update |
|
| 154 |
- ( set -x; $sh_c 'sleep 3; apt-get install -y -q apt-transport-https' ) |
|
| 155 |
- fi |
|
| 156 |
- if [ -z "$curl" ]; then |
|
| 157 |
- apt_get_update |
|
| 158 |
- ( set -x; $sh_c 'sleep 3; apt-get install -y -q curl' ) |
|
| 159 |
- curl='curl -sSL' |
|
| 160 |
- fi |
|
| 161 |
- ( |
|
| 162 |
- set -x |
|
| 163 |
- if [ "https://get.docker.com/" = "$url" ]; then |
|
| 164 |
- $sh_c "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9" |
|
| 165 |
- elif [ "https://test.docker.com/" = "$url" ]; then |
|
| 166 |
- $sh_c "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 740B314AE3941731B942C66ADF4FD13717AAD7D6" |
|
| 167 |
- else |
|
| 168 |
- $sh_c "$curl ${url}gpg | apt-key add -"
|
|
| 169 |
- fi |
|
| 170 |
- $sh_c "echo deb ${url}ubuntu docker main > /etc/apt/sources.list.d/docker.list"
|
|
| 171 |
- $sh_c 'sleep 3; apt-get update; apt-get install -y -q lxc-docker' |
|
| 172 |
- ) |
|
| 173 |
- if command_exists docker && [ -e /var/run/docker.sock ]; then |
|
| 174 |
- ( |
|
| 175 |
- set -x |
|
| 176 |
- $sh_c 'docker version' |
|
| 177 |
- ) || true |
|
| 178 |
- fi |
|
| 179 |
- your_user=your-user |
|
| 180 |
- [ "$user" != 'root' ] && your_user="$user" |
|
| 181 |
- echo |
|
| 182 |
- echo 'If you would like to use Docker as a non-root user, you should now consider' |
|
| 183 |
- echo 'adding your user to the "docker" group with something like:' |
|
| 184 |
- echo |
|
| 185 |
- echo ' sudo usermod -aG docker' $your_user |
|
| 186 |
- echo |
|
| 187 |
- echo 'Remember that you will have to log out and back in for this to take effect!' |
|
| 188 |
- echo |
|
| 189 |
- exit 0 |
|
| 190 |
- ;; |
|
| 191 |
- |
|
| 192 |
- gentoo) |
|
| 193 |
- if [ "$url" = "https://test.docker.com/" ]; then |
|
| 194 |
- echo >&2 |
|
| 195 |
- echo >&2 ' You appear to be trying to install the latest nightly build in Gentoo.' |
|
| 196 |
- echo >&2 ' The portage tree should contain the latest stable release of Docker, but' |
|
| 197 |
- echo >&2 ' if you want something more recent, you can always use the live ebuild' |
|
| 198 |
- echo >&2 ' provided in the "docker" overlay available via layman. For more' |
|
| 199 |
- echo >&2 ' instructions, please see the following URL:' |
|
| 200 |
- echo >&2 ' https://github.com/tianon/docker-overlay#using-this-overlay' |
|
| 201 |
- echo >&2 ' After adding the "docker" overlay, you should be able to:' |
|
| 202 |
- echo >&2 ' emerge -av =app-emulation/docker-9999' |
|
| 203 |
- echo >&2 |
|
| 204 |
- exit 1 |
|
| 205 |
- fi |
|
| 206 |
- |
|
| 207 |
- ( |
|
| 208 |
- set -x |
|
| 209 |
- $sh_c 'sleep 3; emerge app-emulation/docker' |
|
| 210 |
- ) |
|
| 211 |
- exit 0 |
|
| 212 |
- ;; |
|
| 213 |
-esac |
|
| 214 |
- |
|
| 215 |
-cat >&2 <<'EOF' |
|
| 216 |
- |
|
| 217 |
- Either your platform is not easily detectable, is not supported by this |
|
| 218 |
- installer script (yet - PRs welcome! [hack/install.sh]), or does not yet have |
|
| 219 |
- a package for Docker. Please visit the following URL for more detailed |
|
| 220 |
- installation instructions: |
|
| 221 |
- |
|
| 222 |
- https://docs.docker.com/en/latest/installation/ |
|
| 223 |
- |
|
| 224 |
-EOF |
|
| 225 |
-exit 1 |
| 226 | 1 |
deleted file mode 100755 |
| ... | ... |
@@ -1,242 +0,0 @@ |
| 1 |
-#!/usr/bin/env bash |
|
| 2 |
-set -e |
|
| 3 |
- |
|
| 4 |
-# This script builds various binary artifacts from a checkout of the docker |
|
| 5 |
-# source code. |
|
| 6 |
-# |
|
| 7 |
-# Requirements: |
|
| 8 |
-# - The current directory should be a checkout of the docker source code |
|
| 9 |
-# (http://github.com/docker/docker). Whatever version is checked out |
|
| 10 |
-# will be built. |
|
| 11 |
-# - The VERSION file, at the root of the repository, should exist, and |
|
| 12 |
-# will be used as Docker binary version and package version. |
|
| 13 |
-# - The hash of the git commit will also be included in the Docker binary, |
|
| 14 |
-# with the suffix -dirty if the repository isn't clean. |
|
| 15 |
-# - The script is intended to be run inside the docker container specified |
|
| 16 |
-# in the Dockerfile at the root of the source. In other words: |
|
| 17 |
-# DO NOT CALL THIS SCRIPT DIRECTLY. |
|
| 18 |
-# - The right way to call this script is to invoke "make" from |
|
| 19 |
-# your checkout of the Docker repository. |
|
| 20 |
-# The Makefile will do a "docker build -t docker ." and then |
|
| 21 |
-# "docker run hack/make.sh" in the resulting image. |
|
| 22 |
-# |
|
| 23 |
- |
|
| 24 |
-set -o pipefail |
|
| 25 |
- |
|
| 26 |
-export DOCKER_PKG='github.com/docker/docker' |
|
| 27 |
- |
|
| 28 |
-# We're a nice, sexy, little shell script, and people might try to run us; |
|
| 29 |
-# but really, they shouldn't. We want to be in a container! |
|
| 30 |
-if [ "$(pwd)" != "/go/src/$DOCKER_PKG" ] || [ -z "$DOCKER_CROSSPLATFORMS" ]; then |
|
| 31 |
- {
|
|
| 32 |
- echo "# WARNING! I don't seem to be running in the Docker container." |
|
| 33 |
- echo "# The result of this command might be an incorrect build, and will not be" |
|
| 34 |
- echo "# officially supported." |
|
| 35 |
- echo "#" |
|
| 36 |
- echo "# Try this instead: make all" |
|
| 37 |
- echo "#" |
|
| 38 |
- } >&2 |
|
| 39 |
-fi |
|
| 40 |
- |
|
| 41 |
-echo |
|
| 42 |
- |
|
| 43 |
-# List of bundles to create when no argument is passed |
|
| 44 |
-DEFAULT_BUNDLES=( |
|
| 45 |
- validate-dco |
|
| 46 |
- validate-gofmt |
|
| 47 |
- |
|
| 48 |
- binary |
|
| 49 |
- |
|
| 50 |
- test-unit |
|
| 51 |
- test-integration |
|
| 52 |
- test-integration-cli |
|
| 53 |
- |
|
| 54 |
- dynbinary |
|
| 55 |
- dyntest-unit |
|
| 56 |
- dyntest-integration |
|
| 57 |
- |
|
| 58 |
- cover |
|
| 59 |
- cross |
|
| 60 |
- tgz |
|
| 61 |
- ubuntu |
|
| 62 |
-) |
|
| 63 |
- |
|
| 64 |
-VERSION=$(cat ./VERSION) |
|
| 65 |
-if command -v git &> /dev/null && git rev-parse &> /dev/null; then |
|
| 66 |
- GITCOMMIT=$(git rev-parse --short HEAD) |
|
| 67 |
- if [ -n "$(git status --porcelain --untracked-files=no)" ]; then |
|
| 68 |
- GITCOMMIT="$GITCOMMIT-dirty" |
|
| 69 |
- fi |
|
| 70 |
-elif [ "$DOCKER_GITCOMMIT" ]; then |
|
| 71 |
- GITCOMMIT="$DOCKER_GITCOMMIT" |
|
| 72 |
-else |
|
| 73 |
- echo >&2 'error: .git directory missing and DOCKER_GITCOMMIT not specified' |
|
| 74 |
- echo >&2 ' Please either build with the .git directory accessible, or specify the' |
|
| 75 |
- echo >&2 ' exact (--short) commit hash you are building using DOCKER_GITCOMMIT for' |
|
| 76 |
- echo >&2 ' future accountability in diagnosing build issues. Thanks!' |
|
| 77 |
- exit 1 |
|
| 78 |
-fi |
|
| 79 |
- |
|
| 80 |
-if [ "$AUTO_GOPATH" ]; then |
|
| 81 |
- rm -rf .gopath |
|
| 82 |
- mkdir -p .gopath/src/"$(dirname "${DOCKER_PKG}")"
|
|
| 83 |
- ln -sf ../../../.. .gopath/src/"${DOCKER_PKG}"
|
|
| 84 |
- export GOPATH="$(pwd)/.gopath:$(pwd)/vendor" |
|
| 85 |
-fi |
|
| 86 |
- |
|
| 87 |
-if [ ! "$GOPATH" ]; then |
|
| 88 |
- echo >&2 'error: missing GOPATH; please see http://golang.org/doc/code.html#GOPATH' |
|
| 89 |
- echo >&2 ' alternatively, set AUTO_GOPATH=1' |
|
| 90 |
- exit 1 |
|
| 91 |
-fi |
|
| 92 |
- |
|
| 93 |
-if [ -z "$DOCKER_CLIENTONLY" ]; then |
|
| 94 |
- DOCKER_BUILDTAGS+=" daemon" |
|
| 95 |
-fi |
|
| 96 |
- |
|
| 97 |
-# Use these flags when compiling the tests and final binary |
|
| 98 |
-LDFLAGS=' |
|
| 99 |
- -w |
|
| 100 |
- -X '$DOCKER_PKG'/dockerversion.GITCOMMIT "'$GITCOMMIT'" |
|
| 101 |
- -X '$DOCKER_PKG'/dockerversion.VERSION "'$VERSION'" |
|
| 102 |
-' |
|
| 103 |
-LDFLAGS_STATIC='-linkmode external' |
|
| 104 |
-EXTLDFLAGS_STATIC='-static' |
|
| 105 |
-# ORIG_BUILDFLAGS is necessary for the cross target which cannot always build |
|
| 106 |
-# with options like -race. |
|
| 107 |
-ORIG_BUILDFLAGS=( -a -tags "netgo static_build $DOCKER_BUILDTAGS" ) |
|
| 108 |
-BUILDFLAGS=( $BUILDFLAGS "${ORIG_BUILDFLAGS[@]}" )
|
|
| 109 |
-# Test timeout. |
|
| 110 |
-: ${TIMEOUT:=30m}
|
|
| 111 |
-TESTFLAGS+=" -test.timeout=${TIMEOUT}"
|
|
| 112 |
- |
|
| 113 |
-# A few more flags that are specific just to building a completely-static binary (see hack/make/binary) |
|
| 114 |
-# PLEASE do not use these anywhere else. |
|
| 115 |
-EXTLDFLAGS_STATIC_DOCKER="$EXTLDFLAGS_STATIC -lpthread -Wl,--unresolved-symbols=ignore-in-object-files" |
|
| 116 |
-LDFLAGS_STATIC_DOCKER=" |
|
| 117 |
- $LDFLAGS_STATIC |
|
| 118 |
- -X $DOCKER_PKG/dockerversion.IAMSTATIC true |
|
| 119 |
- -extldflags \"$EXTLDFLAGS_STATIC_DOCKER\" |
|
| 120 |
-" |
|
| 121 |
- |
|
| 122 |
-if [ "$(uname -s)" = 'FreeBSD' ]; then |
|
| 123 |
- # Tell cgo the compiler is Clang, not GCC |
|
| 124 |
- # https://code.google.com/p/go/source/browse/src/cmd/cgo/gcc.go?spec=svne77e74371f2340ee08622ce602e9f7b15f29d8d3&r=e6794866ebeba2bf8818b9261b54e2eef1c9e588#752 |
|
| 125 |
- export CC=clang |
|
| 126 |
- |
|
| 127 |
- # "-extld clang" is a workaround for |
|
| 128 |
- # https://code.google.com/p/go/issues/detail?id=6845 |
|
| 129 |
- LDFLAGS="$LDFLAGS -extld clang" |
|
| 130 |
-fi |
|
| 131 |
- |
|
| 132 |
-# If sqlite3.h doesn't exist under /usr/include, |
|
| 133 |
-# check /usr/local/include also just in case |
|
| 134 |
-# (e.g. FreeBSD Ports installs it under the directory) |
|
| 135 |
-if [ ! -e /usr/include/sqlite3.h ] && [ -e /usr/local/include/sqlite3.h ]; then |
|
| 136 |
- export CGO_CFLAGS='-I/usr/local/include' |
|
| 137 |
- export CGO_LDFLAGS='-L/usr/local/lib' |
|
| 138 |
-fi |
|
| 139 |
- |
|
| 140 |
-HAVE_GO_TEST_COVER= |
|
| 141 |
-if \ |
|
| 142 |
- go help testflag | grep -- -cover > /dev/null \ |
|
| 143 |
- && go tool -n cover > /dev/null 2>&1 \ |
|
| 144 |
-; then |
|
| 145 |
- HAVE_GO_TEST_COVER=1 |
|
| 146 |
-fi |
|
| 147 |
- |
|
| 148 |
-# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'. |
|
| 149 |
-# You can use this to select certain tests to run, eg. |
|
| 150 |
-# |
|
| 151 |
-# TESTFLAGS='-run ^TestBuild$' ./hack/make.sh test |
|
| 152 |
-# |
|
| 153 |
-go_test_dir() {
|
|
| 154 |
- dir=$1 |
|
| 155 |
- coverpkg=$2 |
|
| 156 |
- testcover=() |
|
| 157 |
- if [ "$HAVE_GO_TEST_COVER" ]; then |
|
| 158 |
- # if our current go install has -cover, we want to use it :) |
|
| 159 |
- mkdir -p "$DEST/coverprofiles" |
|
| 160 |
- coverprofile="docker${dir#.}"
|
|
| 161 |
- coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
|
|
| 162 |
- testcover=( -cover -coverprofile "$coverprofile" $coverpkg ) |
|
| 163 |
- fi |
|
| 164 |
- ( |
|
| 165 |
- export DEST |
|
| 166 |
- echo '+ go test' $TESTFLAGS "${DOCKER_PKG}${dir#.}"
|
|
| 167 |
- cd "$dir" |
|
| 168 |
- go test ${testcover[@]} -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}" $TESTFLAGS
|
|
| 169 |
- ) |
|
| 170 |
-} |
|
| 171 |
- |
|
| 172 |
-# This helper function walks the current directory looking for directories |
|
| 173 |
-# holding certain files ($1 parameter), and prints their paths on standard |
|
| 174 |
-# output, one per line. |
|
| 175 |
-find_dirs() {
|
|
| 176 |
- find . -not \( \ |
|
| 177 |
- \( \ |
|
| 178 |
- -wholename './vendor' \ |
|
| 179 |
- -o -wholename './integration' \ |
|
| 180 |
- -o -wholename './integration-cli' \ |
|
| 181 |
- -o -wholename './contrib' \ |
|
| 182 |
- -o -wholename './pkg/mflag/example' \ |
|
| 183 |
- -o -wholename './.git' \ |
|
| 184 |
- -o -wholename './bundles' \ |
|
| 185 |
- -o -wholename './docs' \ |
|
| 186 |
- -o -wholename './pkg/libcontainer/nsinit' \ |
|
| 187 |
- \) \ |
|
| 188 |
- -prune \ |
|
| 189 |
- \) -name "$1" -print0 | xargs -0n1 dirname | sort -u |
|
| 190 |
-} |
|
| 191 |
- |
|
| 192 |
-hash_files() {
|
|
| 193 |
- while [ $# -gt 0 ]; do |
|
| 194 |
- f="$1" |
|
| 195 |
- shift |
|
| 196 |
- dir="$(dirname "$f")" |
|
| 197 |
- base="$(basename "$f")" |
|
| 198 |
- for hashAlgo in md5 sha256; do |
|
| 199 |
- if command -v "${hashAlgo}sum" &> /dev/null; then
|
|
| 200 |
- ( |
|
| 201 |
- # subshell and cd so that we get output files like: |
|
| 202 |
- # $HASH docker-$VERSION |
|
| 203 |
- # instead of: |
|
| 204 |
- # $HASH /go/src/github.com/.../$VERSION/binary/docker-$VERSION |
|
| 205 |
- cd "$dir" |
|
| 206 |
- "${hashAlgo}sum" "$base" > "$base.$hashAlgo"
|
|
| 207 |
- ) |
|
| 208 |
- fi |
|
| 209 |
- done |
|
| 210 |
- done |
|
| 211 |
-} |
|
| 212 |
- |
|
| 213 |
-bundle() {
|
|
| 214 |
- bundlescript=$1 |
|
| 215 |
- bundle=$(basename $bundlescript) |
|
| 216 |
- echo "---> Making bundle: $bundle (in bundles/$VERSION/$bundle)" |
|
| 217 |
- mkdir -p bundles/$VERSION/$bundle |
|
| 218 |
- source $bundlescript $(pwd)/bundles/$VERSION/$bundle |
|
| 219 |
-} |
|
| 220 |
- |
|
| 221 |
-main() {
|
|
| 222 |
- # We want this to fail if the bundles already exist and cannot be removed. |
|
| 223 |
- # This is to avoid mixing bundles from different versions of the code. |
|
| 224 |
- mkdir -p bundles |
|
| 225 |
- if [ -e "bundles/$VERSION" ]; then |
|
| 226 |
- echo "bundles/$VERSION already exists. Removing." |
|
| 227 |
- rm -fr bundles/$VERSION && mkdir bundles/$VERSION || exit 1 |
|
| 228 |
- echo |
|
| 229 |
- fi |
|
| 230 |
- SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
|
|
| 231 |
- if [ $# -lt 1 ]; then |
|
| 232 |
- bundles=(${DEFAULT_BUNDLES[@]})
|
|
| 233 |
- else |
|
| 234 |
- bundles=($@) |
|
| 235 |
- fi |
|
| 236 |
- for bundle in ${bundles[@]}; do
|
|
| 237 |
- bundle $SCRIPTDIR/make/$bundle |
|
| 238 |
- echo |
|
| 239 |
- done |
|
| 240 |
-} |
|
| 241 |
- |
|
| 242 |
-main "$@" |
deleted file mode 100644
@@ -1,10 +0,0 @@
-#!/bin/bash
-
-if ! docker inspect busybox &> /dev/null; then
-	if [ -d /docker-busybox ]; then
-		source "$(dirname "$BASH_SOURCE")/.ensure-scratch"
-		( set -x; docker build -t busybox /docker-busybox )
-	else
-		( set -x; docker pull busybox )
-	fi
-fi
deleted file mode 100644
@@ -1,21 +0,0 @@
-#!/bin/bash
-
-if ! docker inspect scratch &> /dev/null; then
-	# let's build a "docker save" tarball for "scratch"
-	# see https://github.com/docker/docker/pull/5262
-	# and also https://github.com/docker/docker/issues/4242
-	mkdir -p /docker-scratch
-	(
-		cd /docker-scratch
-		echo '{"scratch":{"latest":"511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158"}}' > repositories
-		mkdir -p 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
-		(
-			cd 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
-			echo '{"id":"511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158","comment":"Imported from -","created":"2013-06-13T14:03:50.821769-07:00","container_config":{"Hostname":"","Domainname":"","User":"","Memory":0,"MemorySwap":0,"CpuShares":0,"AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"PortSpecs":null,"ExposedPorts":null,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"NetworkDisabled":false,"OnBuild":null},"docker_version":"0.4.0","architecture":"x86_64","Size":0}' > json
-			echo '1.0' > VERSION
-			tar -cf layer.tar --files-from /dev/null
-		)
-	)
-	( set -x; tar -cf /docker-scratch.tar -C /docker-scratch . )
-	( set -x; docker load --input /docker-scratch.tar )
-fi
deleted file mode 100755
@@ -1,26 +0,0 @@
-#!/bin/bash
-set -e
-
-# Compile phase run by parallel in test-unit. No support for coverpkg
-
-dir=$1
-out_file="$DEST/precompiled/$dir.test"
-testcover=()
-if [ "$HAVE_GO_TEST_COVER" ]; then
-	# if our current go install has -cover, we want to use it :)
-	mkdir -p "$DEST/coverprofiles"
-	coverprofile="docker${dir#.}"
-	coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
-	testcover=( -cover -coverprofile "$coverprofile" ) # missing $coverpkg
-fi
-if [ "$BUILDFLAGS_FILE" ]; then
-	readarray -t BUILDFLAGS < "$BUILDFLAGS_FILE"
-fi
-(
-	cd "$dir"
-	go test "${testcover[@]}" -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}" $TESTFLAGS -c
-)
-[ $? -ne 0 ] && return 1
-mkdir -p "$(dirname "$out_file")"
-mv "$dir/$(basename "$dir").test" "$out_file"
-echo "Precompiled: ${DOCKER_PKG}${dir#.}"
deleted file mode 100644
@@ -1,33 +0,0 @@
-#!/bin/bash
-
-if [ -z "$VALIDATE_UPSTREAM" ]; then
-	# this is kind of an expensive check, so let's not do this twice if we
-	# are running more than one validate bundlescript
-
-	VALIDATE_REPO='https://github.com/docker/docker.git'
-	VALIDATE_BRANCH='master'
-
-	if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
-		VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
-		VALIDATE_BRANCH="${TRAVIS_BRANCH}"
-	fi
-
-	VALIDATE_HEAD="$(git rev-parse --verify HEAD)"
-
-	git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
-	VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"
-
-	VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
-	VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
-
-	validate_diff() {
-		if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
-			git diff "$VALIDATE_COMMIT_DIFF" "$@"
-		fi
-	}
-	validate_log() {
-		if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
-			git log "$VALIDATE_COMMIT_LOG" "$@"
-		fi
-	}
-fi
deleted file mode 100644
@@ -1,17 +0,0 @@
-This directory holds scripts called by `make.sh` in the parent directory.
-
-Each script is named after the bundle it creates.
-They should not be called directly; instead, pass the bundle names as arguments to `make.sh`, for example:
-
-```
-./hack/make.sh test
-./hack/make.sh binary ubuntu
-
-# Or to run all bundles:
-./hack/make.sh
-```
-
-To add a bundle:
-
-* Create a shell-compatible file here
-* Add it to `$DEFAULT_BUNDLES` in `make.sh`
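To illustrate those two steps, here is what a hypothetical bundle script could look like (a sketch only; `hello` is not a real bundle). `make.sh`'s `bundle()` sources each script with the bundle's output directory, `bundles/$VERSION/<name>`, as `$1`:

```shell
#!/bin/bash
set -e

# Hypothetical bundle script "hack/make/hello": make.sh sources this file
# with bundles/$VERSION/hello as $1. The temp-dir fallback exists only so
# this sketch also runs standalone.
DEST=${1:-$(mktemp -d)}

echo 'hello from a bundle' > "$DEST/hello.txt"
echo "Created: $DEST/hello.txt"
```

After adding `hello` to `$DEFAULT_BUNDLES`, it would run via `./hack/make.sh hello` alongside the other bundles.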
deleted file mode 100755
@@ -1,17 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-
-go build \
-	-o "$DEST/docker-$VERSION" \
-	"${BUILDFLAGS[@]}" \
-	-ldflags "
-		$LDFLAGS
-		$LDFLAGS_STATIC_DOCKER
-	" \
-	./docker
-echo "Created binary: $DEST/docker-$VERSION"
-ln -sf "docker-$VERSION" "$DEST/docker"
-
-hash_files "$DEST/docker-$VERSION"
deleted file mode 100644
@@ -1,22 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST="$1"
-
-bundle_cover() {
-	coverprofiles=( "$DEST/../"*"/coverprofiles/"* )
-	for p in "${coverprofiles[@]}"; do
-		echo
-		(
-			set -x
-			go tool cover -func="$p"
-		)
-	done
-}
-
-if [ "$HAVE_GO_TEST_COVER" ]; then
-	bundle_cover 2>&1 | tee "$DEST/report.log"
-else
-	echo >&2 'warning: the current version of go does not support -cover'
-	echo >&2 '  skipping test coverage report'
-fi
deleted file mode 100644
@@ -1,33 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-
-# explicit list of os/arch combos that support being a daemon
-declare -A daemonSupporting
-daemonSupporting=(
-	[linux/amd64]=1
-)
-
-# if we have our linux/amd64 version compiled, let's symlink it in
-if [ -x "$DEST/../binary/docker-$VERSION" ]; then
-	mkdir -p "$DEST/linux/amd64"
-	(
-		cd "$DEST/linux/amd64"
-		ln -s ../../../binary/* ./
-	)
-	echo "Created symlinks:" "$DEST/linux/amd64/"*
-fi
-
-for platform in $DOCKER_CROSSPLATFORMS; do
-	(
-		mkdir -p "$DEST/$platform" # bundles/VERSION/cross/GOOS/GOARCH/docker-VERSION
-		export GOOS=${platform%/*}
-		export GOARCH=${platform##*/}
-		if [ -z "${daemonSupporting[$platform]}" ]; then
-			export LDFLAGS_STATIC_DOCKER="" # we just need a simple client for these platforms
-			export BUILDFLAGS=( "${ORIG_BUILDFLAGS[@]/ daemon/}" ) # remove the "daemon" build tag from platforms that aren't supported
-		fi
-		source "$(dirname "$BASH_SOURCE")/binary" "$DEST/$platform"
-	)
-done
deleted file mode 100644
@@ -1,45 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-
-if [ -z "$DOCKER_CLIENTONLY" ]; then
-	# dockerinit still needs to be a static binary, even if docker is dynamic
-	go build \
-		-o "$DEST/dockerinit-$VERSION" \
-		"${BUILDFLAGS[@]}" \
-		-ldflags "
-			$LDFLAGS
-			$LDFLAGS_STATIC
-			-extldflags \"$EXTLDFLAGS_STATIC\"
-		" \
-		./dockerinit
-	echo "Created binary: $DEST/dockerinit-$VERSION"
-	ln -sf "dockerinit-$VERSION" "$DEST/dockerinit"
-
-	hash_files "$DEST/dockerinit-$VERSION"
-
-	sha1sum=
-	if command -v sha1sum &> /dev/null; then
-		sha1sum=sha1sum
-	elif command -v shasum &> /dev/null; then
-		# Mac OS X - why couldn't they just use the same command name and be happy?
-		sha1sum=shasum
-	else
-		echo >&2 'error: cannot find sha1sum command or equivalent'
-		exit 1
-	fi
-
-	# sha1 our new dockerinit to ensure separate docker and dockerinit always run in a perfect pair compiled for one another
-	export DOCKER_INITSHA1="$($sha1sum $DEST/dockerinit-$VERSION | cut -d' ' -f1)"
-else
-	# DOCKER_CLIENTONLY must be truthy, so we don't need to bother with dockerinit :)
-	export DOCKER_INITSHA1=""
-fi
-# exported so that "dyntest" can easily access it later without recalculating it
-
-(
-	export LDFLAGS_STATIC_DOCKER="-X $DOCKER_PKG/dockerversion.INITSHA1 \"$DOCKER_INITSHA1\" -X $DOCKER_PKG/dockerversion.INITPATH \"$DOCKER_INITPATH\""
-	export BUILDFLAGS=( "${BUILDFLAGS[@]/netgo /}" ) # disable netgo, since we don't need it for a dynamic binary
-	source "$(dirname "$BASH_SOURCE")/binary"
-)
deleted file mode 100644
@@ -1,18 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-INIT=$DEST/../dynbinary/dockerinit-$VERSION
-
-if [ ! -x "$INIT" ]; then
-	echo >&2 'error: dynbinary must be run before dyntest-integration'
-	false
-fi
-
-(
-	export TEST_DOCKERINIT_PATH="$INIT"
-	export LDFLAGS_STATIC_DOCKER="
-		-X $DOCKER_PKG/dockerversion.INITSHA1 \"$DOCKER_INITSHA1\"
-	"
-	source "$(dirname "$BASH_SOURCE")/test-integration"
-)
deleted file mode 100644
@@ -1,18 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-INIT=$DEST/../dynbinary/dockerinit-$VERSION
-
-if [ ! -x "$INIT" ]; then
-	echo >&2 'error: dynbinary must be run before dyntest-unit'
-	false
-fi
-
-(
-	export TEST_DOCKERINIT_PATH="$INIT"
-	export LDFLAGS_STATIC_DOCKER="
-		-X $DOCKER_PKG/dockerversion.INITSHA1 \"$DOCKER_INITSHA1\"
-	"
-	source "$(dirname "$BASH_SOURCE")/test-unit"
-)
deleted file mode 100644
@@ -1,15 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-
-bundle_test_integration() {
-	LDFLAGS="$LDFLAGS $LDFLAGS_STATIC_DOCKER" go_test_dir ./integration \
-		"-coverpkg $(find_dirs '*.go' | sed 's,^\.,'$DOCKER_PKG',g' | paste -d, -s)"
-}
-
-# this "grep" hides some really irritating warnings that "go test -coverpkg"
-# spews when it is given packages that aren't used
-exec > >(tee -a $DEST/test.log) 2>&1
-bundle_test_integration 2>&1 \
-	| grep --line-buffered -v '^warning: no packages being tested depend on '
deleted file mode 100644
@@ -1,46 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-
-DOCKER_GRAPHDRIVER=${DOCKER_GRAPHDRIVER:-vfs}
-DOCKER_EXECDRIVER=${DOCKER_EXECDRIVER:-native}
-
-bundle_test_integration_cli() {
-	go_test_dir ./integration-cli
-}
-
-# subshell so that we can export PATH without breaking other things
-exec > >(tee -a $DEST/test.log) 2>&1
-(
-	export PATH="$DEST/../binary:$DEST/../dynbinary:$PATH"
-
-	if ! command -v docker &> /dev/null; then
-		echo >&2 'error: binary or dynbinary must be run before test-integration-cli'
-		false
-	fi
-
-	# intentionally open a couple bogus file descriptors to help test that they get scrubbed in containers
-	exec 41>&1 42>&2
-
-	( set -x; exec \
-		docker --daemon --debug \
-		--storage-driver "$DOCKER_GRAPHDRIVER" \
-		--exec-driver "$DOCKER_EXECDRIVER" \
-		--pidfile "$DEST/docker.pid" \
-		&> "$DEST/docker.log"
-	) &
-
-	# pull the busybox image before running the tests
-	sleep 2
-
-	source "$(dirname "$BASH_SOURCE")/.ensure-busybox"
-
-	bundle_test_integration_cli
-
-	for pid in $(find "$DEST" -name docker.pid); do
-		DOCKER_PID=$(set -x; cat "$pid")
-		( set -x; kill $DOCKER_PID )
-		wait $DOCKER_PID || true
-	done
-)
deleted file mode 100644
@@ -1,86 +0,0 @@
-#!/bin/bash
-set -e
-
-DEST=$1
-: ${PARALLEL_JOBS:=$(nproc)}
-
-RED=$'\033[31m'
-GREEN=$'\033[32m'
-TEXTRESET=$'\033[0m' # reset the foreground colour
-
-# Run Docker's test suite, including sub-packages, and store their output as a bundle
-# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'.
-# You can use this to select certain tests to run, eg.
-#
-#   TESTFLAGS='-run ^TestBuild$' ./hack/make.sh test-unit
-#
-bundle_test_unit() {
-	{
-		date
-
-		# Run all the tests if no TESTDIRS were specified.
-		if [ -z "$TESTDIRS" ]; then
-			TESTDIRS=$(find_dirs '*_test.go')
-		fi
-		(
-			export LDFLAGS="$LDFLAGS $LDFLAGS_STATIC_DOCKER"
-			export TESTFLAGS
-			export HAVE_GO_TEST_COVER
-			export DEST
-			if command -v parallel &> /dev/null; then
-				# accommodate parallel to be able to access variables
-				export SHELL="$BASH"
-				export HOME="$(mktemp -d)"
-				mkdir -p "$HOME/.parallel"
-				touch "$HOME/.parallel/ignored_vars"
-
-				# some hack to export array variables
-				export BUILDFLAGS_FILE="$HOME/buildflags_file"
-				( IFS=$'\n'; echo "${BUILDFLAGS[*]}" ) > "$BUILDFLAGS_FILE"
-
-				echo "$TESTDIRS" | parallel --jobs "$PARALLEL_JOBS" --halt 2 --env _ "$(dirname "$BASH_SOURCE")/.go-compile-test-dir"
-				rm -rf "$HOME"
-			else
-				# aww, no "parallel" available - fall back to boring
-				for test_dir in $TESTDIRS; do
-					"$(dirname "$BASH_SOURCE")/.go-compile-test-dir" "$test_dir"
-				done
-			fi
-		)
-		echo "$TESTDIRS" | go_run_test_dir
-	}
-}
-
-go_run_test_dir() {
-	TESTS_FAILED=()
-	while read dir; do
-		echo
-		echo '+ go test' $TESTFLAGS "${DOCKER_PKG}${dir#.}"
-		precompiled="$DEST/precompiled/$dir.test"
-		if ! ( cd "$dir" && "$precompiled" $TESTFLAGS ); then
-			TESTS_FAILED+=("$dir")
-			echo
-			echo "${RED}Tests failed: $dir${TEXTRESET}"
-			sleep 1 # give it a second, so observers watching can take note
-		fi
-	done
-
-	echo
-	echo
-	echo
-
-	# if some tests fail, we want the bundlescript to fail, but we want to
-	# try running ALL the tests first, hence TESTS_FAILED
-	if [ "${#TESTS_FAILED[@]}" -gt 0 ]; then
-		echo "${RED}Test failures in: ${TESTS_FAILED[@]}${TEXTRESET}"
-		echo
-		false
-	else
-		echo "${GREEN}Test success${TEXTRESET}"
-		echo
-		true
-	fi
-}
-
-exec > >(tee -a $DEST/test.log) 2>&1
-bundle_test_unit
deleted file mode 100644
@@ -1,31 +0,0 @@
-#!/bin/bash
-
-DEST="$1"
-CROSS="$DEST/../cross"
-
-set -e
-
-if [ ! -d "$CROSS/linux/amd64" ]; then
-	echo >&2 'error: binary and cross must be run before tgz'
-	false
-fi
-
-for d in "$CROSS/"*/*; do
-	GOARCH="$(basename "$d")"
-	GOOS="$(basename "$(dirname "$d")")"
-	mkdir -p "$DEST/$GOOS/$GOARCH"
-	TGZ="$DEST/$GOOS/$GOARCH/docker-$VERSION.tgz"
-
-	mkdir -p "$DEST/build"
-
-	mkdir -p "$DEST/build/usr/local/bin"
-	cp -L "$d/docker-$VERSION" "$DEST/build/usr/local/bin/docker"
-
-	tar --numeric-owner --owner 0 -C "$DEST/build" -czf "$TGZ" usr
-
-	hash_files "$TGZ"
-
-	rm -rf "$DEST/build"
-
-	echo "Created tgz: $TGZ"
-done
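The tgz bundle above stages each binary under a `usr/local/bin/` prefix and tars from the staging root, so the archive unpacks cleanly against `/` (or any prefix) on the target machine. A standalone sketch of that staging trick with a placeholder file instead of a real binary (paths here are hypothetical; assumes GNU tar for `--numeric-owner --owner 0`):

```shell
#!/usr/bin/env bash
set -e

# Stage a fake binary the way the tgz bundle stages docker-$VERSION.
dest="$(mktemp -d)"
mkdir -p "$dest/build/usr/local/bin"
echo 'fake docker binary' > "$dest/build/usr/local/bin/docker"

# --numeric-owner --owner 0 keeps the archive root-owned regardless of
# who ran the build.
tar --numeric-owner --owner 0 -C "$dest/build" -czf "$dest/docker-demo.tgz" usr

# On a target machine this would be `tar -xzf docker-demo.tgz -C /`;
# extract into a scratch prefix here instead.
prefix="$(mktemp -d)"
tar -xzf "$dest/docker-demo.tgz" -C "$prefix"
ls "$prefix/usr/local/bin/docker"
```

Tarring the relative `usr` directory (rather than absolute paths) is what makes the extraction prefix a user choice at install time.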
deleted file mode 100644
@@ -1,176 +0,0 @@
-#!/bin/bash
-
-DEST=$1
-
-PKGVERSION="$VERSION"
-if [ -n "$(git status --porcelain)" ]; then
-	PKGVERSION="$PKGVERSION-$(date +%Y%m%d%H%M%S)-$GITCOMMIT"
-fi
-
-PACKAGE_ARCHITECTURE="$(dpkg-architecture -qDEB_HOST_ARCH)"
-PACKAGE_URL="http://www.docker.com/"
-PACKAGE_MAINTAINER="support@docker.com"
-PACKAGE_DESCRIPTION="Linux container runtime
-Docker complements LXC with a high-level API which operates at the process
-level. It runs unix processes with strong guarantees of isolation and
-repeatability across servers.
-Docker is a great building block for automating distributed systems:
-large-scale web deployments, database clusters, continuous deployment systems,
-private PaaS, service-oriented architectures, etc."
-PACKAGE_LICENSE="Apache-2.0"
-
-# Build docker as an ubuntu package using FPM and REPREPRO (sue me).
-# bundle_binary must be called first.
-bundle_ubuntu() {
-	DIR=$DEST/build
-
-	# Include our udev rules
-	mkdir -p $DIR/etc/udev/rules.d
-	cp contrib/udev/80-docker.rules $DIR/etc/udev/rules.d/
-
-	# Include our init scripts
-	mkdir -p $DIR/etc/init
-	cp contrib/init/upstart/docker.conf $DIR/etc/init/
-	mkdir -p $DIR/etc/init.d
-	cp contrib/init/sysvinit-debian/docker $DIR/etc/init.d/
-	mkdir -p $DIR/etc/default
-	cp contrib/init/sysvinit-debian/docker.default $DIR/etc/default/docker
-	mkdir -p $DIR/lib/systemd/system
-	cp contrib/init/systemd/docker.{service,socket} $DIR/lib/systemd/system/
-
-	# Include contributed completions
-	mkdir -p $DIR/etc/bash_completion.d
-	cp contrib/completion/bash/docker $DIR/etc/bash_completion.d/
-	mkdir -p $DIR/usr/share/zsh/vendor-completions
-	cp contrib/completion/zsh/_docker $DIR/usr/share/zsh/vendor-completions/
-	mkdir -p $DIR/etc/fish/completions
-	cp contrib/completion/fish/docker.fish $DIR/etc/fish/completions/
-
-	# Include contributed man pages
-	docs/man/md2man-all.sh -q
-	manRoot="$DIR/usr/share/man"
-	mkdir -p "$manRoot"
-	for manDir in docs/man/man?; do
-		manBase="$(basename "$manDir")" # "man1"
-		for manFile in "$manDir"/*; do
-			manName="$(basename "$manFile")" # "docker-build.1"
-			mkdir -p "$manRoot/$manBase"
-			gzip -c "$manFile" > "$manRoot/$manBase/$manName.gz"
-		done
-	done
-
-	# Copy the binary
-	# This will fail if the binary bundle hasn't been built
-	mkdir -p $DIR/usr/bin
-	cp $DEST/../binary/docker-$VERSION $DIR/usr/bin/docker
-
-	# Generate postinst/prerm/postrm scripts
-	cat > $DEST/postinst <<'EOF'
-#!/bin/sh
-set -e
-set -u
-
-if [ "$1" = 'configure' ] && [ -z "$2" ]; then
-	if ! getent group docker > /dev/null; then
-		groupadd --system docker
-	fi
-fi
-
-if ! { [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; }; then
-	# we only need to do this if upstart isn't in charge
-	update-rc.d docker defaults > /dev/null || true
-fi
-if [ -n "$2" ]; then
-	_dh_action=restart
-else
-	_dh_action=start
-fi
-service docker $_dh_action 2>/dev/null || true
-
-#DEBHELPER#
-EOF
-	cat > $DEST/prerm <<'EOF'
-#!/bin/sh
-set -e
-set -u
-
-service docker stop 2>/dev/null || true
-
-#DEBHELPER#
-EOF
-	cat > $DEST/postrm <<'EOF'
-#!/bin/sh
-set -e
-set -u
-
-if [ "$1" = "purge" ] ; then
-	update-rc.d docker remove > /dev/null || true
-fi
-
-# In case this system is running systemd, we make systemd reload the unit files
-# to pick up changes.
-if [ -d /run/systemd/system ] ; then
-	systemctl --system daemon-reload > /dev/null || true
-fi
-
-#DEBHELPER#
-EOF
-	# TODO swaths of these were borrowed from debhelper's auto-inserted stuff, because we're still using fpm - we need to use debhelper instead, and somehow reconcile Ubuntu that way
-	chmod +x $DEST/postinst $DEST/prerm $DEST/postrm
-
-	(
-		# switch directories so we create *.deb in the right folder
-		cd $DEST
-
-		# create lxc-docker-VERSION package
-		fpm -s dir -C $DIR \
-			--name lxc-docker-$VERSION --version $PKGVERSION \
-			--after-install $DEST/postinst \
-			--before-remove $DEST/prerm \
-			--after-remove $DEST/postrm \
-			--architecture "$PACKAGE_ARCHITECTURE" \
-			--prefix / \
-			--depends iptables \
-			--deb-recommends aufs-tools \
-			--deb-recommends ca-certificates \
-			--deb-recommends git \
-			--deb-recommends xz-utils \
-			--deb-recommends 'cgroupfs-mount | cgroup-lite' \
-			--description "$PACKAGE_DESCRIPTION" \
-			--maintainer "$PACKAGE_MAINTAINER" \
-			--conflicts docker \
-			--conflicts docker.io \
-			--conflicts lxc-docker-virtual-package \
-			--provides lxc-docker \
-			--provides lxc-docker-virtual-package \
-			--replaces lxc-docker \
-			--replaces lxc-docker-virtual-package \
-			--url "$PACKAGE_URL" \
-			--license "$PACKAGE_LICENSE" \
-			--config-files /etc/udev/rules.d/80-docker.rules \
-			--config-files /etc/init/docker.conf \
-			--config-files /etc/init.d/docker \
-			--config-files /etc/default/docker \
-			--deb-compression gz \
-			-t deb .
-		# TODO replace "Suggests: cgroup-lite" with "Recommends: cgroupfs-mount | cgroup-lite" once cgroupfs-mount is available
-
-		# create empty lxc-docker wrapper package
-		fpm -s empty \
-			--name lxc-docker --version $PKGVERSION \
-			--architecture "$PACKAGE_ARCHITECTURE" \
-			--depends lxc-docker-$VERSION \
-			--description "$PACKAGE_DESCRIPTION" \
-			--maintainer "$PACKAGE_MAINTAINER" \
-			--url "$PACKAGE_URL" \
-			--license "$PACKAGE_LICENSE" \
-			--deb-compression gz \
-			-t deb
-	)
-
-	# clean up after ourselves so we have a clean output directory
|
| 172 |
- rm $DEST/postinst $DEST/prerm $DEST/postrm |
|
| 173 |
- rm -r $DIR |
|
| 174 |
-} |
|
| 175 |
- |
|
| 176 |
-bundle_ubuntu |
deleted file mode 100644
@@ -1,56 +0,0 @@
-#!/bin/bash
-
-source "$(dirname "$BASH_SOURCE")/.validate"
-
-adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }')
-dels=$(validate_diff --numstat | awk '{ s += $2 } END { print s }')
-notDocs="$(validate_diff --numstat | awk '$3 !~ /^docs\// { print $3 }')"
-
-: ${adds:=0}
-: ${dels:=0}
-
-# "Username may only contain alphanumeric characters or dashes and cannot begin with a dash"
-githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'
-
-# https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work
-dcoPrefix='Signed-off-by:'
-dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"
-
-check_dco() {
-	grep -qE "$dcoRegex"
-}
-
-if [ $adds -eq 0 -a $dels -eq 0 ]; then
-	echo '0 adds, 0 deletions; nothing to validate! :)'
-elif [ -z "$notDocs" -a $adds -le 1 -a $dels -le 1 ]; then
-	echo 'Congratulations! DCO small-patch-exception material!'
-else
-	commits=( $(validate_log --format='format:%H%n') )
-	badCommits=()
-	for commit in "${commits[@]}"; do
-		if [ -z "$(git log -1 --format='format:' --name-status "$commit")" ]; then
-			# no content (ie, Merge commit, etc)
-			continue
-		fi
-		if ! git log -1 --format='format:%B' "$commit" | check_dco; then
-			badCommits+=( "$commit" )
-		fi
-	done
-	if [ ${#badCommits[@]} -eq 0 ]; then
-		echo "Congratulations! All commits are properly signed with the DCO!"
-	else
-		{
-			echo "These commits do not have a proper '$dcoPrefix' marker:"
-			for commit in "${badCommits[@]}"; do
-				echo " - $commit"
-			done
-			echo
-			echo 'Please amend each commit to include a properly formatted DCO marker.'
-			echo
-			echo 'Visit the following URL for information about the Docker DCO:'
-			echo ' https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work'
-			echo
-		} >&2
-		false
-	fi
-fi
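The `dcoRegex` above is the heart of this check. As a standalone sketch (the commit message below is a made-up example, not taken from the repository), the same pattern can be exercised directly:

```shell
#!/bin/sh
# Same pattern as the DCO script above; the commit message is a
# hypothetical example (name, email, and username are placeholders).
githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'
dcoPrefix='Signed-off-by:'
dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"

msg='Fix the frobnicator

Signed-off-by: Jane Doe <jane@example.com> (github: janedoe)'

# grep -qE succeeds when any line of the message matches the trailer pattern
if printf '%s\n' "$msg" | grep -qE "$dcoRegex"; then
	echo 'DCO marker found'
else
	echo 'DCO marker missing' >&2
fi
```

In practice `git commit -s` appends a `Signed-off-by:` trailer in exactly this shape (minus the optional `(github: ...)` suffix, which the regex treats as optional).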
deleted file mode 100644
@@ -1,30 +0,0 @@
-#!/bin/bash
-
-source "$(dirname "$BASH_SOURCE")/.validate"
-
-IFS=$'\n'
-files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
-unset IFS
-
-badFiles=()
-for f in "${files[@]}"; do
-	# we use "git show" here to validate that what's committed is formatted
-	if [ "$(git show "$VALIDATE_HEAD:$f" | gofmt -s -l)" ]; then
-		badFiles+=( "$f" )
-	fi
-done
-
-if [ ${#badFiles[@]} -eq 0 ]; then
-	echo 'Congratulations! All Go source files are properly formatted.'
-else
-	{
-		echo "These files are not properly gofmt'd:"
-		for f in "${badFiles[@]}"; do
-			echo " - $f"
-		done
-		echo
-		echo 'Please reformat the above files using "gofmt -s -w" and commit the result.'
-		echo
-	} >&2
-	false
-fi
deleted file mode 100755
@@ -1,389 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-# This script looks for bundles built by make.sh, and releases them on a
-# public S3 bucket.
-#
-# Bundles should be available for the VERSION string passed as argument.
-#
-# The correct way to call this script is inside a container built by the
-# official Dockerfile at the root of the Docker source code. The Dockerfile,
-# make.sh and release.sh should all be from the same source code revision.
-
-set -o pipefail
-
-# Print a usage message and exit.
-usage() {
-	cat >&2 <<'EOF'
-To run, I need:
-- to be in a container generated by the Dockerfile at the top of the Docker
-  repository;
-- to be provided with the name of an S3 bucket, in environment variable
-  AWS_S3_BUCKET;
-- to be provided with AWS credentials for this S3 bucket, in environment
-  variables AWS_ACCESS_KEY and AWS_SECRET_KEY;
-- the passphrase to unlock the GPG key which will sign the deb packages
-  (passed as environment variable GPG_PASSPHRASE);
-- a generous amount of good will and nice manners.
-The canonical way to run me is to run the image produced by the Dockerfile, e.g.:
-
-docker run -e AWS_S3_BUCKET=test.docker.com \
-	-e AWS_ACCESS_KEY=... \
-	-e AWS_SECRET_KEY=... \
-	-e GPG_PASSPHRASE=... \
-	-i -t --privileged \
-	docker ./hack/release.sh
-EOF
-	exit 1
-}
-
-[ "$AWS_S3_BUCKET" ] || usage
-[ "$AWS_ACCESS_KEY" ] || usage
-[ "$AWS_SECRET_KEY" ] || usage
-[ "$GPG_PASSPHRASE" ] || usage
-[ -d /go/src/github.com/docker/docker ] || usage
-cd /go/src/github.com/docker/docker
-[ -x hack/make.sh ] || usage
-
-RELEASE_BUNDLES=(
-	binary
-	cross
-	tgz
-	ubuntu
-)
-
-if [ "$1" != '--release-regardless-of-test-failure' ]; then
-	RELEASE_BUNDLES=(
-		test-unit test-integration
-		"${RELEASE_BUNDLES[@]}"
-		test-integration-cli
-	)
-fi
-
-VERSION=$(cat VERSION)
-BUCKET=$AWS_S3_BUCKET
-
-# These are the 2 keys we've used to sign the deb's
-# release (get.docker.com)
-# GPG_KEY="36A1D7869245C8950F966E92D8576A8BA88D21E9"
-# test (test.docker.com)
-# GPG_KEY="740B314AE3941731B942C66ADF4FD13717AAD7D6"
-
-setup_s3() {
-	# Try creating the bucket. Ignore errors (it might already exist).
-	s3cmd mb s3://$BUCKET 2>/dev/null || true
-	# Check access to the bucket.
-	# s3cmd has no useful exit status, so we cannot check that.
-	# Instead, we check if it outputs anything on standard output.
-	# (When there are problems, it uses standard error instead.)
-	s3cmd info s3://$BUCKET | grep -q .
-	# Make the bucket accessible through website endpoints.
-	s3cmd ws-create --ws-index index --ws-error error s3://$BUCKET
-}
-
-# write_to_s3 uploads the contents of standard input to the specified S3 url.
-write_to_s3() {
-	DEST=$1
-	F=`mktemp`
-	cat > $F
-	s3cmd --acl-public --mime-type='text/plain' put $F $DEST
-	rm -f $F
-}
-
-s3_url() {
-	case "$BUCKET" in
-		get.docker.com|test.docker.com)
-			echo "https://$BUCKET"
-			;;
-		*)
-			s3cmd ws-info s3://$BUCKET | awk -v 'FS=: +' '/http:\/\/'$BUCKET'/ { gsub(/\/+$/, "", $2); print $2 }'
-			;;
-	esac
-}
-
-build_all() {
-	if ! ./hack/make.sh "${RELEASE_BUNDLES[@]}"; then
-		echo >&2
-		echo >&2 'The build or tests appear to have failed.'
-		echo >&2
-		echo >&2 'You, as the release maintainer, now have a couple options:'
-		echo >&2 '- delay release and fix issues'
-		echo >&2 '- delay release and fix issues'
-		echo >&2 '- did we mention how important this is? issues need fixing :)'
-		echo >&2
-		echo >&2 'As a final LAST RESORT, you (because only you, the release maintainer,'
-		echo >&2 ' really knows all the hairy problems at hand with the current release'
-		echo >&2 ' issues) may bypass this checking by running this script again with the'
-		echo >&2 ' single argument of "--release-regardless-of-test-failure", which will skip'
-		echo >&2 ' running the test suite, and will only build the binaries and packages. Please'
-		echo >&2 ' avoid using this if at all possible.'
-		echo >&2
-		echo >&2 'Regardless, we cannot stress enough the scarcity with which this bypass'
-		echo >&2 ' should be used. If there are release issues, we should always err on the'
-		echo >&2 ' side of caution.'
-		echo >&2
-		exit 1
-	fi
-}
-
-upload_release_build() {
-	src="$1"
-	dst="$2"
-	latest="$3"
-
-	echo
-	echo "Uploading $src"
-	echo " to $dst"
-	echo
-	s3cmd --follow-symlinks --preserve --acl-public put "$src" "$dst"
-	if [ "$latest" ]; then
-		echo
-		echo "Copying to $latest"
-		echo
-		s3cmd --acl-public cp "$dst" "$latest"
-	fi
-
-	# get hash files too (see hash_files() in hack/make.sh)
-	for hashAlgo in md5 sha256; do
-		if [ -e "$src.$hashAlgo" ]; then
-			echo
-			echo "Uploading $src.$hashAlgo"
-			echo " to $dst.$hashAlgo"
-			echo
-			s3cmd --follow-symlinks --preserve --acl-public --mime-type='text/plain' put "$src.$hashAlgo" "$dst.$hashAlgo"
-			if [ "$latest" ]; then
-				echo
-				echo "Copying to $latest.$hashAlgo"
-				echo
-				s3cmd --acl-public cp "$dst.$hashAlgo" "$latest.$hashAlgo"
-			fi
-		fi
-	done
-}
-
-release_build() {
-	GOOS=$1
-	GOARCH=$2
-
-	binDir=bundles/$VERSION/cross/$GOOS/$GOARCH
-	tgzDir=bundles/$VERSION/tgz/$GOOS/$GOARCH
-	binary=docker-$VERSION
-	tgz=docker-$VERSION.tgz
-
-	latestBase=
-	if [ -z "$NOLATEST" ]; then
-		latestBase=docker-latest
-	fi
-
-	# we need to map our GOOS and GOARCH to uname values
-	# see https://en.wikipedia.org/wiki/Uname
-	# ie, GOOS=linux -> "uname -s"=Linux
-
-	s3Os=$GOOS
-	case "$s3Os" in
-		darwin)
-			s3Os=Darwin
-			;;
-		freebsd)
-			s3Os=FreeBSD
-			;;
-		linux)
-			s3Os=Linux
-			;;
-		*)
-			echo >&2 "error: can't convert $s3Os to an appropriate value for 'uname -s'"
-			exit 1
-			;;
-	esac
-
-	s3Arch=$GOARCH
-	case "$s3Arch" in
-		amd64)
-			s3Arch=x86_64
-			;;
-		386)
-			s3Arch=i386
-			;;
-		arm)
-			s3Arch=armel
-			# someday, we might potentially support multiple GOARM values, in which case we might get armhf here too
-			;;
-		*)
-			echo >&2 "error: can't convert $s3Arch to an appropriate value for 'uname -m'"
-			exit 1
-			;;
-	esac
-
-	s3Dir=s3://$BUCKET/builds/$s3Os/$s3Arch
-	latest=
-	latestTgz=
-	if [ "$latestBase" ]; then
-		latest="$s3Dir/$latestBase"
-		latestTgz="$s3Dir/$latestBase.tgz"
-	fi
-
-	if [ ! -x "$binDir/$binary" ]; then
-		echo >&2 "error: can't find $binDir/$binary - was it compiled properly?"
-		exit 1
-	fi
-	if [ ! -f "$tgzDir/$tgz" ]; then
-		echo >&2 "error: can't find $tgzDir/$tgz - was it packaged properly?"
-		exit 1
-	fi
-
-	upload_release_build "$binDir/$binary" "$s3Dir/$binary" "$latest"
-	upload_release_build "$tgzDir/$tgz" "$s3Dir/$tgz" "$latestTgz"
-}
-
-# Upload the 'ubuntu' bundle to S3:
-# 1. A full APT repository is published at $BUCKET/ubuntu/
-# 2. Instructions for using the APT repository are uploaded at $BUCKET/ubuntu/index
-release_ubuntu() {
-	[ -e bundles/$VERSION/ubuntu ] || {
-		echo >&2 './hack/make.sh must be run before release_ubuntu'
-		exit 1
-	}
-
-	# Sign our packages
-	dpkg-sig -g "--passphrase $GPG_PASSPHRASE" -k releasedocker \
-		--sign builder bundles/$VERSION/ubuntu/*.deb
-
-	# Setup the APT repo
-	APTDIR=bundles/$VERSION/ubuntu/apt
-	mkdir -p $APTDIR/conf $APTDIR/db
-	s3cmd sync s3://$BUCKET/ubuntu/db/ $APTDIR/db/ || true
-	cat > $APTDIR/conf/distributions <<EOF
-Codename: docker
-Components: main
-Architectures: amd64 i386
-EOF
-
-	# Add the DEB package to the APT repo
-	DEBFILE=bundles/$VERSION/ubuntu/lxc-docker*.deb
-	reprepro -b $APTDIR includedeb docker $DEBFILE
-
-	# Sign
-	for F in $(find $APTDIR -name Release); do
-		gpg -u releasedocker --passphrase $GPG_PASSPHRASE \
-			--armor --sign --detach-sign \
-			--output $F.gpg $F
-	done
-
-	# Upload keys
-	s3cmd sync $HOME/.gnupg/ s3://$BUCKET/ubuntu/.gnupg/
-	gpg --armor --export releasedocker > bundles/$VERSION/ubuntu/gpg
-	s3cmd --acl-public put bundles/$VERSION/ubuntu/gpg s3://$BUCKET/gpg
-
-	local gpgFingerprint=36A1D7869245C8950F966E92D8576A8BA88D21E9
-	if [[ $BUCKET == test* ]]; then
-		gpgFingerprint=740B314AE3941731B942C66ADF4FD13717AAD7D6
-	fi
-
-	# Upload repo
-	s3cmd --acl-public sync $APTDIR/ s3://$BUCKET/ubuntu/
-	cat <<EOF | write_to_s3 s3://$BUCKET/ubuntu/index
-# Check that HTTPS transport is available to APT
-if [ ! -e /usr/lib/apt/methods/https ]; then
-	apt-get update
-	apt-get install -y apt-transport-https
-fi
-
-# Add the repository to your APT sources
-echo deb $(s3_url)/ubuntu docker main > /etc/apt/sources.list.d/docker.list
-
-# Then import the repository key
-apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys $gpgFingerprint
-
-# Install docker
-apt-get update
-apt-get install -y lxc-docker
-
-#
-# Alternatively, just use the curl-able install.sh script provided at $(s3_url)
-#
-EOF
-
-	# Add redirect at /ubuntu/info for URL-backwards-compatibility
-	rm -rf /tmp/emptyfile && touch /tmp/emptyfile
-	s3cmd --acl-public --add-header='x-amz-website-redirect-location:/ubuntu/' --mime-type='text/plain' put /tmp/emptyfile s3://$BUCKET/ubuntu/info
-
-	echo "APT repository uploaded. Instructions available at $(s3_url)/ubuntu"
-}
-
-# Upload binaries and tgz files to S3
-release_binaries() {
-	[ -e bundles/$VERSION/cross/linux/amd64/docker-$VERSION ] || {
-		echo >&2 './hack/make.sh must be run before release_binaries'
-		exit 1
-	}
-
-	for d in bundles/$VERSION/cross/*/*; do
-		GOARCH="$(basename "$d")"
-		GOOS="$(basename "$(dirname "$d")")"
-		release_build "$GOOS" "$GOARCH"
-	done
-
-	# TODO create redirect from builds/*/i686 to builds/*/i386
-
-	cat <<EOF | write_to_s3 s3://$BUCKET/builds/index
-# To install, run the following command as root:
-curl -sSL -O $(s3_url)/builds/Linux/x86_64/docker-$VERSION && chmod +x docker-$VERSION && sudo mv docker-$VERSION /usr/local/bin/docker
-# Then start docker in daemon mode:
-sudo /usr/local/bin/docker -d
-EOF
-
-	# Add redirect at /builds/info for URL-backwards-compatibility
-	rm -rf /tmp/emptyfile && touch /tmp/emptyfile
-	s3cmd --acl-public --add-header='x-amz-website-redirect-location:/builds/' --mime-type='text/plain' put /tmp/emptyfile s3://$BUCKET/builds/info
-
-	if [ -z "$NOLATEST" ]; then
-		echo "Advertising $VERSION on $BUCKET as most recent version"
-		echo $VERSION | write_to_s3 s3://$BUCKET/latest
-	fi
-}
-
-# Upload the index script
-release_index() {
-	sed "s,url='https://get.docker.com/',url='$(s3_url)/'," hack/install.sh | write_to_s3 s3://$BUCKET/index
-}
-
-release_test() {
-	if [ -e "bundles/$VERSION/test" ]; then
-		s3cmd --acl-public sync bundles/$VERSION/test/ s3://$BUCKET/test/
-	fi
-}
-
-setup_gpg() {
-	# Make sure that we have our keys
-	mkdir -p $HOME/.gnupg/
-	s3cmd sync s3://$BUCKET/ubuntu/.gnupg/ $HOME/.gnupg/ || true
-	gpg --list-keys releasedocker >/dev/null || {
-		gpg --gen-key --batch <<EOF
-Key-Type: RSA
-Key-Length: 4096
-Passphrase: $GPG_PASSPHRASE
-Name-Real: Docker Release Tool
-Name-Email: docker@docker.com
-Name-Comment: releasedocker
-Expire-Date: 0
-%commit
-EOF
-	}
-}
-
-main() {
-	build_all
-	setup_s3
-	setup_gpg
-	release_binaries
-	release_ubuntu
-	release_index
-	release_test
-}
-
-main
-
-echo
-echo
-echo "Release complete; see $(s3_url)"
-echo
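Taken together, `release_build` and `upload_release_build` imply a fixed S3 layout for published artifacts. A small sketch of the resulting object names for one hypothetical linux/amd64 build (`VERSION` and `BUCKET` are placeholder values; `s3Os`/`s3Arch` follow the GOOS/GOARCH-to-uname mapping above):

```shell
#!/bin/sh
# Sketch of the S3 layout produced by release_build/upload_release_build.
# VERSION and BUCKET are placeholders; linux -> Linux, amd64 -> x86_64
# per the case statements in release_build.
VERSION=1.3.0
BUCKET=test.docker.com
s3Os=Linux
s3Arch=x86_64

s3Dir="s3://$BUCKET/builds/$s3Os/$s3Arch"
for artifact in "docker-$VERSION" "docker-$VERSION.tgz"; do
	# the artifact itself
	echo "$s3Dir/$artifact"
	# plus its hash files, when make.sh produced them
	for hashAlgo in md5 sha256; do
		echo "$s3Dir/$artifact.$hashAlgo"
	done
done
```

This is why the install instructions written to `builds/index` can hard-code a path like `builds/Linux/x86_64/docker-$VERSION`.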
deleted file mode 100755
@@ -1,22 +0,0 @@
-#!/usr/bin/env bash
-
-## Run this script from the root of the docker repository
-## to query project stats useful to the maintainers.
-## You will need to install `pulls` and `issues` from
-## http://github.com/crosbymichael/pulls
-
-set -e
-
-echo -n "Open pulls: "
-PULLS=$(pulls | wc -l); let PULLS=$PULLS-1
-echo $PULLS
-
-echo -n "Pulls alru: "
-pulls alru
-
-echo -n "Open issues: "
-ISSUES=$(issues list | wc -l); let ISSUES=$ISSUES-1
-echo $ISSUES
-
-echo -n "Issues alru: "
-issues alru
deleted file mode 100755
@@ -1,73 +0,0 @@
-#!/usr/bin/env bash
-set -e
-
-cd "$(dirname "$BASH_SOURCE")/.."
-
-# Downloads dependencies into vendor/ directory
-mkdir -p vendor
-cd vendor
-
-clone() {
-	vcs=$1
-	pkg=$2
-	rev=$3
-
-	pkg_url=https://$pkg
-	target_dir=src/$pkg
-
-	echo -n "$pkg @ $rev: "
-
-	if [ -d $target_dir ]; then
-		echo -n 'rm old, '
-		rm -fr $target_dir
-	fi
-
-	echo -n 'clone, '
-	case $vcs in
-		git)
-			git clone --quiet --no-checkout $pkg_url $target_dir
-			( cd $target_dir && git reset --quiet --hard $rev )
-			;;
-		hg)
-			hg clone --quiet --updaterev $rev $pkg_url $target_dir
-			;;
-	esac
-
-	echo -n 'rm VCS, '
-	( cd $target_dir && rm -rf .{git,hg} )
-
-	echo done
-}
-
-clone git github.com/kr/pty 67e2db24c8
-
-clone git github.com/gorilla/context 14f550f51a
-
-clone git github.com/gorilla/mux 136d54f81f
-
-clone git github.com/tchap/go-patricia v1.0.1
-
-clone hg code.google.com/p/go.net 84a4013f96e0
-
-clone hg code.google.com/p/gosqlite 74691fb6f837
-
-clone git github.com/docker/libtrust d273ef2565ca
-
-clone git github.com/Sirupsen/logrus v0.6.0
-
-# get Go tip's archive/tar, for xattr support and improved performance
-# TODO after Go 1.4 drops, bump our minimum supported version and drop this vendored dep
-if [ "$1" = '--go' ]; then
-	# Go takes forever and a half to clone, so we only redownload it when explicitly requested via the "--go" flag to this script.
-	clone hg code.google.com/p/go 1b17b3426e3c
-	mv src/code.google.com/p/go/src/pkg/archive/tar tmp-tar
-	rm -rf src/code.google.com/p/go
-	mkdir -p src/code.google.com/p/go/src/pkg/archive
-	mv tmp-tar src/code.google.com/p/go/src/pkg/archive/tar
-fi
-
-clone git github.com/docker/libcontainer 4ae31b6ceb2c2557c9f05f42da61b0b808faa5a4
-# see src/github.com/docker/libcontainer/update-vendor.sh which is the "source of truth" for libcontainer deps (just like this file)
-rm -rf src/github.com/docker/libcontainer/vendor
-eval "$(grep '^clone ' src/github.com/docker/libcontainer/update-vendor.sh | grep -v 'github.com/codegangsta/cli')"
-# we exclude "github.com/codegangsta/cli" here because it's only needed for "nsinit", which Docker doesn't include
new file mode 100644
@@ -0,0 +1,17 @@
+# Docker Governance Advisory Board Meetings
+
+In the spirit of openness, Docker created a Governance Advisory Board, and committed to make all materials and notes from the meetings of this group public.
+All output from the meetings should be considered proposals only, and is subject to the review and approval of the community and the project leadership.
+
+The materials from the first Docker Governance Advisory Board meeting, held on October 28, 2014, are available in this
+[Google Docs folder](http://goo.gl/Alfj8r).
+
+These include:
+
+* First Meeting Notes
+* DGAB Charter
+* Presentation 1: Introductory Presentation, including State of The Project
+* Presentation 2: Overall Contribution Structure/Docker Project Core Proposal
+* Presentation 3: Long Term Roadmap/Statement of Direction
+
+
new file mode 100644
@@ -0,0 +1,130 @@
+# The Docker Maintainer manual |
|
| 1 |
+ |
|
| 2 |
+## Introduction |
|
| 3 |
+ |
|
| 4 |
+Dear maintainer. Thank you for investing the time and energy to help |
|
| 5 |
+make Docker as useful as possible. Maintaining a project is difficult, |
|
| 6 |
+sometimes unrewarding work. Sure, you will get to contribute cool |
|
| 7 |
+features to the project. But most of your time will be spent reviewing, |
|
| 8 |
+cleaning up, documenting, answering questions, and justifying design |
|
| 9 |
+decisions - while everyone has all the fun! But remember - the quality |
|
| 10 |
+of the maintainers' work is what distinguishes the good projects from |
|
| 11 |
+the great. So please be proud of your work, even the unglamourous parts, |
|
| 12 |
+and encourage a culture of appreciation and respect for *every* aspect |
|
| 13 |
+of improving the project - not just the hot new features. |
|
| 14 |
+ |
|
| 15 |
+This document is a manual for maintainers old and new. It explains what |
|
| 16 |
+is expected of maintainers, how they should work, and what tools are |
|
| 17 |
+available to them. |
|
| 18 |
+ |
|
| 19 |
+This is a living document - if you see something out of date or missing, |
|
| 20 |
+speak up! |
|
| 21 |
+ |
|
| 22 |
+## What is a maintainer's responsibility? |
|
| 23 |
+ |
|
| 24 |
+It is every maintainer's responsibility to: |
|
| 25 |
+ |
|
| 26 |
+1. Expose a clear road map for improving their component. |
|
| 27 |
+2. Deliver prompt feedback and decisions on pull requests. |
|
| 28 |
+3. Be available to anyone with questions, bug reports, criticism etc. |
|
| 29 |
+ on their component. This includes IRC, GitHub requests and the mailing |
|
| 30 |
+ list. |
|
| 31 |
+4. Make sure their component respects the philosophy, design and |
|
| 32 |
+ road map of the project. |
|
| 33 |
+ |
|
| 34 |
+## How are decisions made? |
|
| 35 |
+ |
|
| 36 |
+Short answer: with pull requests to the Docker repository. |
|
| 37 |
+ |
|
| 38 |
+Docker is an open-source project with an open design philosophy. This |
|
| 39 |
+means that the repository is the source of truth for EVERY aspect of the |
|
| 40 |
+project, including its philosophy, design, road map, and APIs. *If it's |
|
| 41 |
+part of the project, it's in the repo. If it's in the repo, it's part of |
|
| 42 |
+the project.* |
|
| 43 |
+ |
|
| 44 |
+As a result, all decisions can be expressed as changes to the |
|
| 45 |
+repository. An implementation change is a change to the source code. An |
|
| 46 |
+API change is a change to the API specification. A philosophy change is |
|
| 47 |
+a change to the philosophy manifesto, and so on. |
|
| 48 |
+ |
|
| 49 |
+All decisions affecting Docker, big and small, follow the same 3 steps: |
|
| 50 |
+ |
|
| 51 |
+* Step 1: Open a pull request. Anyone can do this. |
|
| 52 |
+ |
|
| 53 |
+* Step 2: Discuss the pull request. Anyone can do this. |
|
| 54 |
+ |
|
| 55 |
+* Step 3: Accept (`LGTM`) or refuse a pull request. The relevant maintainers do |
|
| 56 |
+this (see below "Who decides what?") |
|
| 57 |
+ + Accepting pull requests |
|
| 58 |
+ - If the pull request appears to be ready to merge, give it a `LGTM`, which |
|
| 59 |
+ stands for "Looks Good To Me". |
|
| 60 |
+ - If the pull request has some small problems that need to be changed, make |
|
| 61 |
+ a comment adressing the issues. |
|
| 62 |
+ - If the changes needed to a PR are small, you can add a "LGTM once the |
|
| 63 |
+ following comments are adressed..." this will reduce needless back and |
|
| 64 |
+ forth. |
|
| 65 |
+ - If the PR only needs a few changes before being merged, any MAINTAINER can |
|
| 66 |
+ make a replacement PR that incorporates the existing commits and fixes the |
|
| 67 |
+ problems before a fast track merge. |
|
| 68 |
+ + Closing pull requests |
|
| 69 |
+ - If a PR appears to be abandoned, after having attempted to contact the |
|
| 70 |
  original contributor, then a replacement PR may be made. Once the
  replacement PR is made, any contributor may close the original one.
- If you are not sure whether the pull request implements a good feature, or you
  do not understand the purpose of the PR, ask the contributor to provide
  more documentation. If the contributor is not able to adequately explain
  the purpose of the PR, the PR may be closed by any MAINTAINER.
- If a MAINTAINER feels that the pull request is sufficiently architecturally
  flawed, or if the pull request needs significantly more design discussion
  before being considered, the MAINTAINER should close the pull request with
  a short explanation of what discussion still needs to be had. It is
  important not to leave such pull requests open, as this will waste both the
  MAINTAINER's time and the contributor's time. It is not fair to string a
  contributor along for weeks or months, having them make many changes to a
  PR that will eventually be rejected.

## Who decides what?

All decisions are pull requests, and the relevant maintainers make
decisions by accepting or refusing pull requests. Review and acceptance
by anyone is denoted by adding a comment in the pull request: `LGTM`.
However, only currently listed `MAINTAINERS` are counted towards the
required majority.

Docker follows the timeless, highly efficient and totally unfair system
known as [Benevolent dictator for
life](http://en.wikipedia.org/wiki/Benevolent_Dictator_for_Life), with
yours truly, Solomon Hykes, in the role of BDFL. This means that all
decisions are made, by default, by Solomon. Since making every decision
myself would be highly un-scalable, in practice decisions are spread
across multiple maintainers.

The relevant maintainers for a pull request can be worked out in two steps:

* Step 1: Determine the subdirectories affected by the pull request. This
  might be `src/registry`, `docs/source/api`, or any other part of the repo.

* Step 2: Find the `MAINTAINERS` file which affects this directory. If the
  directory itself does not have a `MAINTAINERS` file, work your way up
  the repo hierarchy until you find one.

There is also a `hack/getmaintainers.sh` script that will print out the
maintainers for a specified directory.
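
The two-step lookup described above can be sketched as a small shell function (a simplified illustration, not the actual script shipped in the repo):

```bash
# Walk up from a directory until a MAINTAINERS file is found, then
# print its path. Simplified illustration of the lookup only.
find_maintainers_file() {
  dir="$1"
  while [ -n "$dir" ] && [ "$dir" != "/" ] && [ "$dir" != "." ]; do
    if [ -f "$dir/MAINTAINERS" ]; then
      echo "$dir/MAINTAINERS"
      return 0
    fi
    dir=$(dirname "$dir")
  done
  return 1
}
```

For example, called on `src/registry`, it returns the nearest `MAINTAINERS` file at or above that directory.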

### I'm a maintainer, and I'm going on holiday

Please let your co-maintainers and other contributors know by raising a pull
request that comments out your `MAINTAINERS` file entry using a `#`.
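
For instance, a commented-out entry might look like this (the name, address, and handle here are purely illustrative, and the exact entry format may vary between `MAINTAINERS` files):

```
# Jane Doe <jane@example.com> (@janedoe)
```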

### I'm a maintainer. Should I make pull requests too?

Yes. Nobody should ever push to master directly. All changes should be
made through a pull request.

### Who assigns maintainers?

Solomon has final `LGTM` approval for all pull requests to `MAINTAINERS` files.

### How is this process changed?

Just like everything else: by making a pull request :)
new file mode 100644
@@ -0,0 +1,336 @@
# Dear Packager,

If you are looking to make Docker available on your favorite software
distribution, this document is for you. It summarizes the requirements for
building and running the Docker client and the Docker daemon.

## Getting Started

We want to help you package Docker successfully. Before doing any packaging, a
good first step is to introduce yourself on the [docker-dev mailing
list](https://groups.google.com/d/forum/docker-dev), explain what you're trying
to achieve, and tell us how we can help. Don't worry, we don't bite! There might
even be someone already working on packaging for the same distro!

You can also join the IRC channel - #docker and #docker-dev on Freenode are both
active and friendly.

We like to refer to Tianon ("@tianon" on GitHub and "tianon" on IRC) as our
"Packagers Relations", since he's always working to make sure our packagers have
a good, healthy upstream to work with (both in our communication and in our
build scripts). If you're having any kind of trouble, feel free to ping him
directly. He also likes to keep track of what distributions we have packagers
for, so feel free to reach out to him even just to say "Hi!"

## Package Name

If possible, your package should be called "docker". If that name is already
taken, a second choice is "lxc-docker", but with the caveat that "LXC" is now an
optional dependency (as noted below). Another possible choice is "docker.io".

## Official Build vs Distro Build

The Docker project maintains its own build and release toolchain. It is pretty
neat and entirely based on Docker (surprise!). This toolchain is the canonical
way to build Docker. We encourage you to give it a try, and if the circumstances
allow you to use it, we recommend that you do.

You might not be able to use the official build toolchain - usually because your
distribution has a toolchain and packaging policy of its own. We get it! Your
house, your rules. The rest of this document should give you the information you
need to package Docker your way, without denaturing it in the process.

## Build Dependencies

To build Docker, you will need the following:

* Recent versions of git and mercurial
* Go version 1.3 or later
* A clean checkout of the source added to a valid [Go
  workspace](http://golang.org/doc/code.html#Workspaces) under the path
  *src/github.com/docker/docker* (unless you plan to use `AUTO_GOPATH`,
  explained in more detail below).

To build the Docker daemon, you will additionally need:

* An amd64/x86_64 machine running Linux
* SQLite version 3.7.9 or later
* libdevmapper version 1.02.68-cvs (2012-01-26) or later from lvm2 version
  2.02.89 or later
* btrfs-progs version 3.8 or later (including commit e5cb128 from 2013-01-07)
  for the necessary btrfs headers

Be sure to also check out Docker's Dockerfile for the most up-to-date list of
these build-time dependencies.

### Go Dependencies

All Go dependencies are vendored under "./vendor". They are used by the official
build, so the source of truth for the current version of each dependency is
whatever is in "./vendor".

To use the vendored dependencies, simply make sure the path to "./vendor" is
included in `GOPATH` (or use `AUTO_GOPATH`, as explained below).
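
For example, if you invoke the Go toolchain directly rather than through the official scripts, one way to do that (a sketch only; adapt the exact layout to your build system) is:

```bash
# Prepend the vendored dependency tree to GOPATH so the Go toolchain
# resolves imports from "./vendor" first. Sketch only; the official
# build scripts handle this for you. Assumes the current directory is
# the Docker source checkout.
export GOPATH="$PWD/vendor:$GOPATH"
```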

If you would rather (or must, due to distro policy) package these dependencies
yourself, take a look at "./hack/vendor.sh" for an easy-to-parse list of the
exact version for each.

NOTE: if you're not able to package the exact version (to the exact commit) of a
given dependency, please get in touch so we can remediate! Even the slightest
deviation can cause subtle, hard-to-diagnose discrepancies. We promise to do
our best to make everybody happy.

## Stripping Binaries

Please, please, please do not strip any compiled binaries. This is really
important.

In our own testing, stripping the resulting binaries sometimes results in a
binary that appears to work, but more often causes random panics, segfaults, and
other issues. Even if the binary appears to work, please don't strip.

See the following quotes from Dave Cheney, which explain this position better
from the upstream Golang perspective.

### [go issue #5855, comment #3](https://code.google.com/p/go/issues/detail?id=5855#c3)

> Super super important: Do not strip go binaries or archives. It isn't tested,
> often breaks, and doesn't work.

### [launchpad golang issue #1200255, comment #8](https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1200255/comments/8)

> To quote myself: "Please do not strip Go binaries, it is not supported, not
> tested, is often broken, and doesn't do what you want"
>
> To unpack that a bit
>
> * not supported, as in, we don't support it, and recommend against it when
>   asked
> * not tested, we don't test stripped binaries as part of the build CI process
> * is often broken, stripping a go binary will produce anywhere from no, to
>   subtle, to outright execution failure, see above

### [launchpad golang issue #1200255, comment #13](https://bugs.launchpad.net/ubuntu/+source/golang/+bug/1200255/comments/13)

> To clarify my previous statements.
>
> * I do not disagree with the debian policy, it is there for a good reason
> * Having said that, stripping Go binaries doesn't work, and nobody is
>   looking at making it work, so there is that.
>
> Thanks for patching the build formula.

## Building Docker

Please use our build script ("./hack/make.sh") for all your compilation of
Docker. If there's something you need that it isn't doing, or something it could
be doing to make your life as a packager easier, please get in touch with Tianon
and help us rectify the situation. Chances are good that other packagers have
run into the same problems, and a fix might already be in the works, but none
of us will know for sure unless you harass Tianon about it. :)

All the commands listed within this section should be run with the Docker source
checkout as the current working directory.

### `AUTO_GOPATH`

If you'd rather not be bothered with the hassle of setting up `GOPATH`
appropriately, and prefer to just get a "build that works", you should add
something similar to this to whatever script or process you're using to build
Docker:

```bash
export AUTO_GOPATH=1
```

This will cause the build scripts to set up a reasonable `GOPATH` that
automatically and properly includes both docker/docker from the local
directory, and the local "./vendor" directory as necessary.

### `DOCKER_BUILDTAGS`

If you're building a binary that may need to be used on platforms that include
AppArmor, you will need to set `DOCKER_BUILDTAGS` as follows:

```bash
export DOCKER_BUILDTAGS='apparmor'
```

If you're building a binary that may need to be used on platforms that include
SELinux, you will need to use the `selinux` build tag:

```bash
export DOCKER_BUILDTAGS='selinux'
```

If your version of btrfs-progs (also called btrfs-tools) is older than 3.16.1,
you will need the following tag so the build does not check for btrfs version
headers:

```bash
export DOCKER_BUILDTAGS='btrfs_noversion'
```

There are build tags for disabling graphdrivers as well. By default, support
for all graphdrivers is built in.

To disable btrfs:

```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_btrfs'
```

To disable devicemapper:

```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_devicemapper'
```

To disable aufs:

```bash
export DOCKER_BUILDTAGS='exclude_graphdriver_aufs'
```

NOTE: if you need to set more than one build tag, separate them with spaces:

```bash
export DOCKER_BUILDTAGS='apparmor selinux exclude_graphdriver_aufs'
```

### Static Daemon

If it is feasible within the constraints of your distribution, you should
seriously consider packaging Docker as a single static binary. A good comparison
is Busybox, which is often packaged statically as a feature to enable mass
portability. Because of the unique way Docker operates, being similarly static
is a "feature".

To build a static Docker daemon binary, run the following command (first
ensuring that all the necessary libraries are available in static form for
linking - see the "Build Dependencies" section above, and the relevant lines
within Docker's own Dockerfile that set up our official build environment):

```bash
./hack/make.sh binary
```

This will create a static binary under
"./bundles/$VERSION/binary/docker-$VERSION", where "$VERSION" is the contents of
the file "./VERSION". This binary is usually installed somewhere like
"/usr/bin/docker".

### Dynamic Daemon / Client-only Binary

If you are only interested in a Docker client binary, you can skip the extra
step of compiling dockerinit by setting `DOCKER_CLIENTONLY` to a non-empty
value, using something similar to the following:

```bash
export DOCKER_CLIENTONLY=1
```

If you need to (due to distro policy, distro library availability, or for other
reasons) create a dynamically compiled daemon binary, or if you are only
interested in creating a client binary for Docker, use something similar to the
following:

```bash
./hack/make.sh dynbinary
```

This will create "./bundles/$VERSION/dynbinary/docker-$VERSION", which for
client-only builds is the important file to grab and install as appropriate.

For daemon builds, you will also need to grab and install
"./bundles/$VERSION/dynbinary/dockerinit-$VERSION", which is created from the
minimal set of Docker's codebase that _must_ be compiled statically (and is thus
a pure static binary). The acceptable locations Docker will search for this file
are as follows (in order):

* as "dockerinit" in the same directory as the daemon binary (i.e., if docker is
  installed at "/usr/bin/docker", then "/usr/bin/dockerinit" will be the first
  place this file is searched for)
* "/usr/libexec/docker/dockerinit" or "/usr/local/libexec/docker/dockerinit"
  ([FHS 3.0 Draft](http://www.linuxbase.org/betaspecs/fhs/fhs.html#usrlibexec))
* "/usr/lib/docker/dockerinit" or "/usr/local/lib/docker/dockerinit" ([FHS
  2.3](http://refspecs.linuxfoundation.org/FHS_2.3/fhs-2.3.html#USRLIBLIBRARIESFORPROGRAMMINGANDPA))
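
That search order can be expressed as a short shell sketch (a hypothetical helper for packagers to sanity-check an install, not Docker's actual implementation):

```bash
# Print the first dockerinit candidate that exists and is executable,
# following the documented search order. Hypothetical helper, not
# Docker's own code.
find_dockerinit() {
  daemon_dir="$1"   # directory containing the installed docker binary
  for candidate in \
      "$daemon_dir/dockerinit" \
      /usr/libexec/docker/dockerinit \
      /usr/local/libexec/docker/dockerinit \
      /usr/lib/docker/dockerinit \
      /usr/local/lib/docker/dockerinit; do
    if [ -x "$candidate" ]; then
      echo "$candidate"
      return 0
    fi
  done
  return 1
}
```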

If (and please, only if) one of the paths above is insufficient due to distro
policy or similar issues, you may use the `DOCKER_INITPATH` environment variable
at compile-time as follows to set a different path for Docker to search:

```bash
export DOCKER_INITPATH=/usr/lib/docker.io/dockerinit
```

If you find yourself needing this, please don't hesitate to reach out to Tianon
to see if it would be reasonable or helpful to add more paths to Docker's list,
especially if there's a relevant standard worth referencing (such as the FHS).

Also, it goes without saying, but for the purposes of the daemon please consider
these two binaries ("docker" and "dockerinit") as a single unit. Mixing and
matching them can cause undesired consequences, and the daemon will fail to run
properly.

## System Dependencies

### Runtime Dependencies

To function properly, the Docker daemon needs the following software to be
installed and available at runtime:

* iptables version 1.4 or later
* procps (or similar provider of a "ps" executable)
* e2fsprogs version 1.4.12 or later (in use: mkfs.ext4, tune2fs) and xfsprogs
  (in use: mkfs.xfs)
* XZ Utils version 4.9 or later
* a [properly
  mounted](https://github.com/tianon/cgroupfs-mount/blob/master/cgroupfs-mount)
  cgroupfs hierarchy (having a single, all-encompassing "cgroup" mount point
  [is](https://github.com/docker/docker/issues/2683)
  [not](https://github.com/docker/docker/issues/3485)
  [sufficient](https://github.com/docker/docker/issues/4568))

Additionally, the Docker client needs the following software to be installed and
available at runtime:

* Git version 1.7 or later

### Kernel Requirements

The Docker daemon has very specific kernel requirements. Most pre-packaged
kernels already include the necessary options enabled. If you are building your
own kernel, you will either need to discover the options necessary via trial and
error, or check out the [Gentoo
ebuild](https://github.com/tianon/docker-overlay/blob/master/app-emulation/docker/docker-9999.ebuild),
in which a list is maintained (and if there are any issues or discrepancies in
that list, please contact Tianon so they can be rectified).

Note that in client mode, there are no specific kernel requirements, and that
the client will even run on alternative platforms such as Mac OS X / Darwin.

### Optional Dependencies

Some of Docker's features are activated by using optional command-line flags or
by having support for them in the kernel or userspace. A few examples include:

* LXC execution driver (requires version 1.0 or later of the LXC utility
  scripts)
* AUFS graph driver (requires AUFS patches/support enabled in the kernel, and at
  least the "auplink" utility from aufs-tools)
* BTRFS graph driver (requires BTRFS support enabled in the kernel)

## Daemon Init Script

Docker expects to run as a daemon at machine startup. Your package will need to
include a script for your distro's process supervisor of choice. Be sure to
check out the "contrib/init" folder in case a suitable init script already
exists (and if one does not, contact Tianon about whether it might be
appropriate for your distro's init script to live there too!).

In general, Docker should be run as root, similar to the following:

```bash
docker -d
```

Generally, a `DOCKER_OPTS` variable of some kind is available for adding more
flags (such as changing the graph driver to use BTRFS, switching the location of
"/var/lib/docker", etc).

## Communicate

As a final note, please do feel free to reach out to Tianon at any time for
pretty much anything. He really does love hearing from our packagers and wants
to make sure we're not being a "hostile upstream". As should be a given, we
appreciate the work our packagers do to make sure we have broad distribution!
new file mode 100644
@@ -0,0 +1,19 @@
# Docker principles

In the design and development of Docker we try to follow these principles:

(Work in progress)

* Don't try to replace every tool. Instead, be an ingredient to improve them.
* Less code is better.
* Fewer components are better. Do you really need to add one more class?
* 50 lines of straightforward, readable code is better than 10 lines of magic
  that nobody can understand.
* Don't do later what you can do now. "//FIXME: refactor" is not acceptable in
  new code.
* When hesitating between two options, choose the one that is easier to
  reverse.
* No is temporary, Yes is forever. If you're not sure about a new feature, say
  no. You can change your mind later.
* Containers must be portable to the greatest possible number of machines. Be
  suspicious of any change which makes machines less interchangeable.
* The fewer moving parts in a container, the better.
* Don't merge it unless you document it.
* Don't document it unless you can keep it up-to-date.
* Don't merge it unless you test it!
* Everyone's problem is slightly different. Focus on the part that is the same
  for everyone, and solve that.
new file mode 100644
@@ -0,0 +1,24 @@
# Hacking on Docker

The hack/ directory holds information and tools for everyone involved in the
process of creating and distributing Docker, specifically:

## Guides

If you're a *contributor* or aspiring contributor, you should read
CONTRIBUTORS.md.

If you're a *maintainer* or aspiring maintainer, you should read
MAINTAINERS.md.

If you're a *packager* or aspiring packager, you should read PACKAGERS.md.

If you're a maintainer in charge of a *release*, you should read
RELEASE-CHECKLIST.md.

## Roadmap

A high-level roadmap is available at ROADMAP.md.

## Build tools

make.sh is the primary build tool for Docker. It is used for compiling the
official binary, running the test suite, and pushing releases.
new file mode 100644
@@ -0,0 +1,303 @@
# Release Checklist
## A maintainer's guide to releasing Docker

So you're in charge of a Docker release? Cool. Here's what to do.

If your experience deviates from this document, please document the changes
to keep it up-to-date.

It is important to note that this document assumes that the git remote in your
repository that corresponds to "https://github.com/docker/docker" is named
"origin". If yours is not (for example, if you've chosen to name it "upstream"
or something similar instead), be sure to adjust the listed snippets for your
local environment accordingly. If you are not sure what your upstream remote is
named, use a command like `git remote -v` to find out.

If you don't have an upstream remote, you can add one easily using something
like:

```bash
export GITHUBUSER="YOUR_GITHUB_USER"
git remote add origin https://github.com/docker/docker.git
git remote add $GITHUBUSER git@github.com:$GITHUBUSER/docker.git
```

### 1. Pull from master and create a release branch

Note: Even for major releases, all of X, Y and Z in vX.Y.Z must be specified
(e.g. v1.0.0).
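
A quick guard against a malformed value (our own suggestion, not part of the official checklist tooling) is a small shell check:

```bash
# Succeed only for fully-specified vX.Y.Z version strings.
# Our own sanity-check suggestion, not official release tooling.
check_version_format() {
  case "$1" in
    v[0-9]*.[0-9]*.[0-9]*) return 0 ;;
    *) return 1 ;;
  esac
}
```

For example, `check_version_format v1.0.0` succeeds while `check_version_format v1.0` fails. The glob is permissive (it won't reject every malformed string), but it catches the common mistake of omitting the patch number.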

```bash
export VERSION=vX.Y.Z
git fetch origin
git branch -D release || true
git checkout --track origin/release
git checkout -b bump_$VERSION
```

If it's a regular release, we usually merge master:

```bash
git merge origin/master
```

Otherwise, if it is a hotfix release, we cherry-pick only the commits we want:

```bash
# get the commit ids we want to cherry-pick
git log
# cherry-pick the commits starting from the oldest one, without including merge commits
git cherry-pick <commit-id>
git cherry-pick <commit-id>
...
```

### 2. Update CHANGELOG.md

You can run this command for reference with git 2.0:

```bash
git fetch --tags
LAST_VERSION=$(git tag -l --sort=-version:refname "v*" | grep -E 'v[0-9\.]+$' | head -1)
git log --stat $LAST_VERSION..bump_$VERSION
```

If you don't have git 2.0 but have a sort command that supports `-V`:

```bash
git fetch --tags
LAST_VERSION=$(git tag -l | grep -E 'v[0-9\.]+$' | sort -rV | head -1)
git log --stat $LAST_VERSION..bump_$VERSION
```

If releasing a major version (X or Y increased in vX.Y.Z), simply listing
notable user-facing features is sufficient:

```markdown
#### Notable features since <last major version>
* New docker command to do something useful
* Remote API change (deprecating old version)
* Performance improvements in some use cases
* ...
```

For minor releases (only Z increases in vX.Y.Z), provide a list of user-facing
changes. Each change should be listed under a category heading formatted as
`#### CATEGORY`.

`CATEGORY` should describe which part of the project is affected. Valid
categories are:

* Builder
* Documentation
* Hack
* Packaging
* Remote API
* Runtime
* Other (please use this category sparingly)

Each change should be formatted as `BULLET DESCRIPTION`, given:

* BULLET: either `-`, `+` or `*`, to indicate a bugfix, new feature or
  upgrade, respectively.

* DESCRIPTION: a concise description of the change that is relevant to the
  end-user, using the present tense. Changes should be described in terms
  of how they affect the user, for example "Add new feature X which allows Y",
  "Fix bug which caused X", "Increase performance of Y".

EXAMPLES:

```markdown
## 0.3.6 (1995-12-25)

#### Builder

+ 'docker build -t FOO .' applies the tag FOO to the newly built image

#### Remote API

- Fix a bug in the optional unix socket transport

#### Runtime

* Improve detection of kernel version
```

If you need a list of contributors between the last major release and the
current bump branch, use something like:

```bash
git log --format='%aN <%aE>' v0.7.0...bump_v0.8.0 | sort -uf
```

Obviously, you'll need to adjust version numbers as necessary. If you just need
a count, add a simple `| wc -l`.

### 3. Change the contents of the VERSION file

```bash
echo ${VERSION#v} > VERSION
```
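
The `${VERSION#v}` expansion is standard shell parameter expansion: it removes the shortest prefix matching `v`, so the file receives the bare version number. For example:

```bash
# ${VAR#pattern} strips the shortest matching prefix from $VAR.
VERSION=v1.2.3
echo "${VERSION#v}"   # prints 1.2.3
```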
|
| 131 |
+ |
|
| 132 |
+### 4. Test the docs |
|
| 133 |
+ |
|
| 134 |
+Make sure that your tree includes documentation for any modified or |
|
| 135 |
+new features, syntax or semantic changes. |
|
| 136 |
+ |
|
| 137 |
+To test locally: |
|
| 138 |
+ |
|
| 139 |
+```bash |
|
| 140 |
+make docs |
|
| 141 |
+``` |
|
| 142 |
+ |
|
| 143 |
+To make a shared test at http://beta-docs.docker.io: |
|
| 144 |
+ |
|
| 145 |
+(You will need the `awsconfig` file added to the `docs/` dir) |
|
| 146 |
+ |
|
| 147 |
+```bash |
|
| 148 |
+make AWS_S3_BUCKET=beta-docs.docker.io BUILD_ROOT=yes docs-release |
|
| 149 |
+``` |
|
| 150 |
+ |
|
| 151 |
+### 5. Commit and create a pull request to the "release" branch |
|
| 152 |
+ |
|
| 153 |
+```bash |
|
| 154 |
+git add VERSION CHANGELOG.md |
|
| 155 |
+git commit -m "Bump version to $VERSION" |
|
| 156 |
+git push $GITHUBUSER bump_$VERSION |
|
| 157 |
+echo "https://github.com/$GITHUBUSER/docker/compare/docker:release...$GITHUBUSER:bump_$VERSION?expand=1" |
|
| 158 |
+``` |
|
| 159 |
+ |
|
| 160 |
+That last command will give you the proper link to visit to ensure that you |
|
| 161 |
+open the PR against the "release" branch instead of accidentally against |
|
| 162 |
+"master" (like so many brave souls before you already have). |
|
| 163 |
+ |
|
| 164 |
+### 6. Get 2 other maintainers to validate the pull request |
|
| 165 |
+ |
|
| 166 |
+### 7. Publish binaries |
|
| 167 |
+ |
|
| 168 |
+To run this you will need access to the release credentials. Get them from the Core maintainers. |
|
| 169 |
+ |
|
| 170 |
+Replace "..." with the respective credentials: |
|
| 171 |
+ |
|
| 172 |
+```bash |
|
| 173 |
+docker build -t docker . |
|
| 174 |
+docker run \ |
|
| 175 |
+ -e AWS_S3_BUCKET=test.docker.com \ |
|
| 176 |
+ -e AWS_ACCESS_KEY="..." \ |
|
| 177 |
+ -e AWS_SECRET_KEY="..." \ |
|
| 178 |
+ -e GPG_PASSPHRASE="..." \ |
|
| 179 |
+ -i -t --privileged \ |
|
| 180 |
+ docker \ |
|
| 181 |
+ hack/release.sh |
|
| 182 |
+``` |
|
| 183 |
+ |
|
| 184 |
+It will run the test suite, build the binaries and packages, |
|
| 185 |
+and upload to the specified bucket (you should use test.docker.com for |
|
| 186 |
+general testing, and once everything is fine, switch to get.docker.com as |
|
| 187 |
+noted below). |
|
| 188 |
+ |
|
| 189 |
+After the binaries and packages are uploaded to test.docker.com, make sure |
|
| 190 |
+they get tested in both Ubuntu and Debian for any obvious installation |
|
| 191 |
+issues or runtime issues. |
|
| 192 |
+ |
|
| 193 |
+Announcing on IRC in both `#docker` and `#docker-dev` is a great way to get |
|
| 194 |
+help testing! An easy way to get some useful links for sharing: |
|
| 195 |
+ |
|
| 196 |
+```bash |
|
| 197 |
+echo "Ubuntu/Debian: https://test.docker.com/ubuntu or curl -sSL https://test.docker.com/ | sh" |
|
| 198 |
+echo "Linux 64bit binary: https://test.docker.com/builds/Linux/x86_64/docker-${VERSION#v}"
|
|
| 199 |
+echo "Darwin/OSX 64bit client binary: https://test.docker.com/builds/Darwin/x86_64/docker-${VERSION#v}"
|
|
| 200 |
+echo "Darwin/OSX 32bit client binary: https://test.docker.com/builds/Darwin/i386/docker-${VERSION#v}"
|
|
| 201 |
+echo "Linux 64bit tgz: https://test.docker.com/builds/Linux/x86_64/docker-${VERSION#v}.tgz"
|
|
| 202 |
+``` |
+
+Once they're tested and reasonably believed to be working, run against
+get.docker.com:
+
+```bash
+docker run \
+	-e AWS_S3_BUCKET=get.docker.com \
+	-e AWS_ACCESS_KEY="..." \
+	-e AWS_SECRET_KEY="..." \
+	-e GPG_PASSPHRASE="..." \
+	-i -t --privileged \
+	docker \
+	hack/release.sh
+```
+
+### 8. Breakathon
+
+Spend several days, along with the community, explicitly investing time and
+resources trying to break Docker in every possible way, documenting any
+findings pertinent to the release. This time should be spent testing and
+finding ways in which the release might have caused various features or upgrade
+environments to have issues, not coding. During this time, the release is in
+code freeze, and any additional code changes will be pushed out to the next
+release.
+
+It should include various levels of breaking Docker, beyond just using Docker
+by the book.
+
+Any issues found may remain issues in this release, but they should be
+documented and accompanied by appropriate warnings.
+
+### 9. Apply tag
+
+It's very important that we don't make the tag until after the official
+release is uploaded to get.docker.com!
+
+```bash
+git tag -a $VERSION -m $VERSION bump_$VERSION
+git push origin $VERSION
+```
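Before pushing, it can be worth confirming that the tag really is annotated (the `-a` form above), since a lightweight tag would silently drop the message. This is an optional sanity check, not part of the official checklist:

```bash
# An annotated tag is its own object of type "tag"; a lightweight tag
# resolves directly to a "commit". $VERSION as in the step above.
git cat-file -t "$VERSION"   # prints "tag" for an annotated tag
```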
+
+### 10. Go to GitHub to merge the `bump_$VERSION` branch into release
+
+Don't forget to push that pretty blue button to delete the leftover
+branch afterwards!
+
+### 11. Update the docs branch
+
+If this is a MAJOR.MINOR.0 release, you need to make a branch for the previous release's
+documentation:
+
+```bash
+git checkout -b docs-$PREVIOUS_MAJOR_MINOR docs
+git fetch
+git reset --hard origin/docs
+git push -f origin docs-$PREVIOUS_MAJOR_MINOR
+```
+
+You will need the `awsconfig` file added to the `docs/` directory to contain the
+s3 credentials for the bucket you are deploying to.
+
+```bash
+git checkout -b docs release || git checkout docs
+git fetch
+git reset --hard origin/release
+git push -f origin docs
+make AWS_S3_BUCKET=docs.docker.com BUILD_ROOT=yes docs-release
+```
+
+The docs will appear on http://docs.docker.com/ (though there may be cached
+versions, so it's worth checking http://docs.docker.com.s3-website-us-east-1.amazonaws.com/).
+For more information about documentation releases, see `docs/README.md`.
+
+Ask Sven or JohnC to invalidate the CloudFront cache using the CDN Planet Chrome applet.
+
+### 12. Create a new pull request to merge release back into master
+
+```bash
+git checkout master
+git fetch
+git reset --hard origin/master
+git merge origin/release
+git checkout -b merge_release_$VERSION
+echo ${VERSION#v}-dev > VERSION
+git add VERSION
+git commit -m "Change version to $(cat VERSION)"
+git push $GITHUBUSER merge_release_$VERSION
+echo "https://github.com/$GITHUBUSER/docker/compare/docker:master...$GITHUBUSER:merge_release_$VERSION?expand=1"
+```
+
+Again, get two maintainers to validate, then merge, then push that pretty
+blue button to delete your branch.
+
+### 13. Rejoice and Evangelize!
+
+Congratulations! You're done.
+
+Go forth and announce the glad tidings of the new release in `#docker`,
+`#docker-dev`, on the [mailing list](https://groups.google.com/forum/#!forum/docker-dev),
+and on Twitter!
new file mode 100644
@@ -0,0 +1,43 @@
+# Docker: Statement of Direction
+
+This document is a high-level overview of where we want to take Docker.
+It is a curated selection of planned improvements which are either important, difficult, or both.
+
+For a more complete view of planned and requested improvements, see [the GitHub issues](https://github.com/docker/docker/issues).
+
+To suggest changes to the roadmap, including additions, please write the change as if it were already in effect, and make a pull request.
+
+
+## Orchestration
+
+Orchestration touches on several aspects of multi-container applications. These include provisioning hosts with the Docker daemon, organizing and maintaining multiple Docker hosts as a cluster, composing an application using multiple containers, and handling the networking between the containers across the hosts.
+
+Today, users accomplish this using a combination of glue scripts and various tools, like Shipper, Deis, Pipeworks, etc.
+
+We want the Docker API to support all aspects of orchestration natively, so that these tools can cleanly and seamlessly integrate into the Docker user experience and remain interoperable with each other.
+
+## Networking
+
+The current Docker networking model works for communication between containers all residing on the same host. Since Docker applications in production are made up of many containers deployed across multiple hosts (and sometimes multiple data centers), Docker's networking model will evolve to accommodate this. One aspect of this evolution is providing a Networking API to enable alternative implementations.
+
+## Storage
+
+Currently, stateful Docker containers are pinned to specific hosts during their lifetime. To support additional resiliency, capacity management, and load balancing, we want to enable live stateful containers to dynamically migrate between hosts. While the Docker Project will provide a "batteries included" implementation for a great out-of-box experience, we will also provide an API for alternative implementations.
+
+## Microsoft Windows
+
+The next Microsoft Windows Server will ship with primitives to support container-based process isolation and resource management. The Docker Project will guide contributors and maintainers developing native Microsoft versions of the Docker Remote API client and Docker daemon to take advantage of these primitives.
+
+## Provenance
+
+When assembling Docker applications, we want users to be confident that images they didn't create themselves are safe to use and build upon. Provenance gives users the capability to digitally verify the inputs and processes constituting an image's origins and lifecycle events.
+
+## Plugin API
+
+We want Docker to run everywhere, and to integrate with every devops tool. Those are ambitious goals, and the only way to reach them is with the Docker community. For the community to participate fully, we need an API which allows Docker to be deeply and easily customized.
+
+We are working on a plugin API which will make Docker very customization-friendly. We believe it will facilitate the integrations listed above, and many more we didn't even think about.
+
+## Multi-Architecture Support
+
+Our goal is to make Docker run everywhere. However, Docker currently only runs on x86_64 systems. We plan on expanding architecture support, so that Docker containers can be created and used on more architectures, including ARM, Joyent SmartOS, and Microsoft.
new file mode 100755
@@ -0,0 +1,88 @@
+#!/bin/bash
+set -e
+
+# DinD: a wrapper script which allows docker to be run inside a docker container.
+# Original version by Jerome Petazzoni <jerome@docker.com>
+# See the blog post: http://blog.docker.com/2013/09/docker-can-now-run-within-docker/
+#
+# This script should be executed inside a docker container in privileged mode
+# ('docker run --privileged', introduced in docker 0.6).
+
+# Usage: dind CMD [ARG...]
+
+# apparmor sucks and Docker needs to know that it's in a container (c) @tianon
+export container=docker
+
+# First, make sure that cgroups are mounted correctly.
+CGROUP=/cgroup
+
+mkdir -p "$CGROUP"
+
+if ! mountpoint -q "$CGROUP"; then
+	mount -n -t tmpfs -o uid=0,gid=0,mode=0755 cgroup $CGROUP || {
+		echo >&2 'Could not make a tmpfs mount. Did you use --privileged?'
+		exit 1
+	}
+fi
+
+if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
+	mount -t securityfs none /sys/kernel/security || {
+		echo >&2 'Could not mount /sys/kernel/security.'
+		echo >&2 'AppArmor detection and -privileged mode might break.'
+	}
+fi
+
+# Mount the cgroup hierarchies exactly as they are in the parent system.
+for SUBSYS in $(cut -d: -f2 /proc/1/cgroup); do
+	mkdir -p "$CGROUP/$SUBSYS"
+	if ! mountpoint -q $CGROUP/$SUBSYS; then
+		mount -n -t cgroup -o "$SUBSYS" cgroup "$CGROUP/$SUBSYS"
+	fi
+
+	# The two following sections address a bug which manifests itself
+	# by a cryptic "lxc-start: no ns_cgroup option specified" when
+	# trying to start containers within a container.
+	# The bug seems to appear when the cgroup hierarchies are not
+	# mounted on the exact same directories in the host, and in the
+	# container.
+
+	# Named, control-less cgroups are mounted with "-o name=foo"
+	# (and appear as such under /proc/<pid>/cgroup) but are usually
+	# mounted on a directory named "foo" (without the "name=" prefix).
+	# Systemd and OpenRC (and possibly others) both create such a
+	# cgroup. To avoid the aforementioned bug, we symlink "foo" to
+	# "name=foo". This shouldn't have any adverse effect.
+	name="${SUBSYS#name=}"
+	if [ "$name" != "$SUBSYS" ]; then
+		ln -s "$SUBSYS" "$CGROUP/$name"
+	fi
+
+	# Likewise, on at least one system, it has been reported that
+	# systemd would mount the CPU and CPU accounting controllers
+	# (respectively "cpu" and "cpuacct") with "-o cpuacct,cpu"
+	# but on a directory called "cpu,cpuacct" (note the inversion
+	# in the order of the groups). This tries to work around it.
+	if [ "$SUBSYS" = 'cpuacct,cpu' ]; then
+		ln -s "$SUBSYS" "$CGROUP/cpu,cpuacct"
+	fi
+done
+
+# Note: as I write those lines, the LXC userland tools cannot set up
+# a "sub-container" properly if the "devices" cgroup is not in its
+# own hierarchy. Let's detect this and issue a warning.
+if ! grep -q :devices: /proc/1/cgroup; then
+	echo >&2 'WARNING: the "devices" cgroup should be in its own hierarchy.'
+fi
+if ! grep -qw devices /proc/1/cgroup; then
+	echo >&2 'WARNING: it looks like the "devices" cgroup is not mounted.'
+fi
+
+# Mount /tmp
+mount -t tmpfs none /tmp
+
+if [ $# -gt 0 ]; then
+	exec "$@"
+fi
+
+echo >&2 'ERROR: No command specified.'
+echo >&2 'You probably want to run hack/make.sh, or maybe a shell?'
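The `${SUBSYS#name=}` expansion in the loop above removes an optional `name=` prefix; when the entry has no such prefix the value comes back unchanged, which is exactly what the following `!=` test relies on to decide whether a symlink is needed. A standalone sketch:

```bash
# Prefix removal is a no-op when the prefix is absent.
SUBSYS='name=systemd'
echo "${SUBSYS#name=}"   # prints: systemd
SUBSYS='cpuset'
echo "${SUBSYS#name=}"   # prints: cpuset
```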
new file mode 100755
@@ -0,0 +1,15 @@
+#!/bin/bash
+set -e
+
+cd "$(dirname "$(readlink -f "$BASH_SOURCE")")/.."
+
+# see also ".mailmap" for how email addresses and names are deduplicated
+
+{
+	cat <<-'EOH'
+	# This file lists all individuals having contributed content to the repository.
+	# For how it is generated, see `hack/generate-authors.sh`.
+	EOH
+	echo
+	git log --format='%aN <%aE>' | sort -uf
+} > AUTHORS
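The `sort -uf` at the end deduplicates case-insensitively: `-f` folds case for comparison and `-u` keeps one line per equal key, so author names differing only in capitalization collapse to a single AUTHORS entry. For example:

```bash
# Three log lines, two of which differ only in case, yield two entries.
printf 'alice <a@example.com>\nAlice <a@example.com>\nBob <b@example.com>\n' | sort -uf
```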
new file mode 100755
@@ -0,0 +1,62 @@
+#!/usr/bin/env bash
+set -e
+
+if [ $# -ne 1 ]; then
+	echo >&2 "Usage: $0 PATH"
+	echo >&2 "Show the primary and secondary maintainers for a given path"
+	exit 1
+fi
+
+set -e
+
+DEST=$1
+DESTFILE=""
+if [ ! -d $DEST ]; then
+	DESTFILE=$(basename $DEST)
+	DEST=$(dirname $DEST)
+fi
+
+MAINTAINERS=()
+cd $DEST
+while true; do
+	if [ -e ./MAINTAINERS ]; then
+		{
+			while read line; do
+				re='^([^:]*): *(.*)$'
+				file=$(echo $line | sed -E -n "s/$re/\1/p")
+				if [ ! -z "$file" ]; then
+					if [ "$file" = "$DESTFILE" ]; then
+						echo "Override: $line"
+						maintainer=$(echo $line | sed -E -n "s/$re/\2/p")
+						MAINTAINERS=("$maintainer" "${MAINTAINERS[@]}")
+					fi
+				else
+					MAINTAINERS+=("$line");
+				fi
+			done;
+		} < MAINTAINERS
+		break
+	fi
+	if [ -d .git ]; then
+		break
+	fi
+	if [ "$(pwd)" = "/" ]; then
+		break
+	fi
+	cd ..
+done
+
+PRIMARY="${MAINTAINERS[0]}"
+PRIMARY_FIRSTNAME=$(echo $PRIMARY | cut -d' ' -f1)
+LGTM_COUNT=${#MAINTAINERS[@]}
+LGTM_COUNT=$((LGTM_COUNT%2 +1))
+
+firstname() {
+	echo $1 | cut -d' ' -f1
+}
+
+echo "A pull request in $1 will need $LGTM_COUNT LGTM's to be merged."
+echo "--- $PRIMARY is the PRIMARY MAINTAINER of $1."
+for SECONDARY in "${MAINTAINERS[@]:1}"; do
+	echo "--- $SECONDARY"
+done
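The `LGTM_COUNT=$((LGTM_COUNT%2 +1))` line above turns the maintainer count into the number of required LGTMs: 2 when the list has an odd number of maintainers, 1 when it is even. Worked through for a few counts:

```bash
# (N % 2) + 1 for a few maintainer counts.
for n in 1 2 3 4; do
	echo "$n maintainers -> $((n % 2 + 1)) LGTM(s)"
done
```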
new file mode 100755
@@ -0,0 +1,225 @@
+#!/bin/sh
+set -e
+#
+# This script is meant for quick & easy install via:
+#   'curl -sSL https://get.docker.com/ | sh'
+# or:
+#   'wget -qO- https://get.docker.com/ | sh'
+#
+#
+# Docker Maintainers:
+#   To update this script on https://get.docker.com,
+#   use hack/release.sh during a normal release,
+#   or the following one-liner for script hotfixes:
+#     s3cmd put --acl-public -P hack/install.sh s3://get.docker.com/index
+#
+
+url='https://get.docker.com/'
+
+command_exists() {
+	command -v "$@" > /dev/null 2>&1
+}
+
+case "$(uname -m)" in
+	*64)
+		;;
+	*)
+		echo >&2 'Error: you are not using a 64bit platform.'
+		echo >&2 'Docker currently only supports 64bit platforms.'
+		exit 1
+		;;
+esac
+
+if command_exists docker || command_exists lxc-docker; then
+	echo >&2 'Warning: "docker" or "lxc-docker" command appears to already exist.'
+	echo >&2 'Please ensure that you do not already have docker installed.'
+	echo >&2 'You may press Ctrl+C now to abort this process and rectify this situation.'
+	( set -x; sleep 20 )
+fi
+
+user="$(id -un 2>/dev/null || true)"
+
+sh_c='sh -c'
+if [ "$user" != 'root' ]; then
+	if command_exists sudo; then
+		sh_c='sudo -E sh -c'
+	elif command_exists su; then
+		sh_c='su -c'
+	else
+		echo >&2 'Error: this installer needs the ability to run commands as root.'
+		echo >&2 'We are unable to find either "sudo" or "su" available to make this happen.'
+		exit 1
+	fi
+fi
+
+curl=''
+if command_exists curl; then
+	curl='curl -sSL'
+elif command_exists wget; then
+	curl='wget -qO-'
+elif command_exists busybox && busybox --list-modules | grep -q wget; then
+	curl='busybox wget -qO-'
+fi
+
+# perform some very rudimentary platform detection
+lsb_dist=''
+if command_exists lsb_release; then
+	lsb_dist="$(lsb_release -si)"
+fi
+if [ -z "$lsb_dist" ] && [ -r /etc/lsb-release ]; then
+	lsb_dist="$(. /etc/lsb-release && echo "$DISTRIB_ID")"
+fi
+if [ -z "$lsb_dist" ] && [ -r /etc/debian_version ]; then
+	lsb_dist='debian'
+fi
+if [ -z "$lsb_dist" ] && [ -r /etc/fedora-release ]; then
+	lsb_dist='fedora'
+fi
+if [ -z "$lsb_dist" ] && [ -r /etc/os-release ]; then
+	lsb_dist="$(. /etc/os-release && echo "$ID")"
+fi
+
+lsb_dist="$(echo "$lsb_dist" | tr '[:upper:]' '[:lower:]')"
+case "$lsb_dist" in
+	amzn|fedora)
+		if [ "$lsb_dist" = 'amzn' ]; then
+			(
+				set -x
+				$sh_c 'sleep 3; yum -y -q install docker'
+			)
+		else
+			(
+				set -x
+				$sh_c 'sleep 3; yum -y -q install docker-io'
+			)
+		fi
+		if command_exists docker && [ -e /var/run/docker.sock ]; then
+			(
+				set -x
+				$sh_c 'docker version'
+			) || true
+		fi
+		your_user=your-user
+		[ "$user" != 'root' ] && your_user="$user"
+		echo
+		echo 'If you would like to use Docker as a non-root user, you should now consider'
+		echo 'adding your user to the "docker" group with something like:'
+		echo
+		echo '  sudo usermod -aG docker' $your_user
+		echo
+		echo 'Remember that you will have to log out and back in for this to take effect!'
+		echo
+		exit 0
+		;;
+
+	ubuntu|debian|linuxmint)
+		export DEBIAN_FRONTEND=noninteractive
+
+		did_apt_get_update=
+		apt_get_update() {
+			if [ -z "$did_apt_get_update" ]; then
+				( set -x; $sh_c 'sleep 3; apt-get update' )
+				did_apt_get_update=1
+			fi
+		}
+
+		# aufs is preferred over devicemapper; try to ensure the driver is available.
+		if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then
+			kern_extras="linux-image-extra-$(uname -r)"
+
+			apt_get_update
+			( set -x; $sh_c 'sleep 3; apt-get install -y -q '"$kern_extras" ) || true
+
+			if ! grep -q aufs /proc/filesystems && ! $sh_c 'modprobe aufs'; then
+				echo >&2 'Warning: tried to install '"$kern_extras"' (for AUFS)'
+				echo >&2 ' but we still have no AUFS. Docker may not work. Proceeding anyways!'
+				( set -x; sleep 10 )
+			fi
+		fi
+
+		# install apparmor utils if they're missing and apparmor is enabled in the kernel
+		# otherwise Docker will fail to start
+		if [ "$(cat /sys/module/apparmor/parameters/enabled 2>/dev/null)" = 'Y' ]; then
+			if command -v apparmor_parser &> /dev/null; then
+				echo 'apparmor is enabled in the kernel and apparmor utils were already installed'
+			else
+				echo 'apparmor is enabled in the kernel, but apparmor_parser missing'
+				apt_get_update
+				( set -x; $sh_c 'sleep 3; apt-get install -y -q apparmor' )
+			fi
+		fi
+
+		if [ ! -e /usr/lib/apt/methods/https ]; then
+			apt_get_update
+			( set -x; $sh_c 'sleep 3; apt-get install -y -q apt-transport-https' )
+		fi
+		if [ -z "$curl" ]; then
+			apt_get_update
+			( set -x; $sh_c 'sleep 3; apt-get install -y -q curl' )
+			curl='curl -sSL'
+		fi
+		(
+			set -x
+			if [ "https://get.docker.com/" = "$url" ]; then
+				$sh_c "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 36A1D7869245C8950F966E92D8576A8BA88D21E9"
+			elif [ "https://test.docker.com/" = "$url" ]; then
+				$sh_c "apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 740B314AE3941731B942C66ADF4FD13717AAD7D6"
+			else
+				$sh_c "$curl ${url}gpg | apt-key add -"
+			fi
+			$sh_c "echo deb ${url}ubuntu docker main > /etc/apt/sources.list.d/docker.list"
+			$sh_c 'sleep 3; apt-get update; apt-get install -y -q lxc-docker'
+		)
+		if command_exists docker && [ -e /var/run/docker.sock ]; then
+			(
+				set -x
+				$sh_c 'docker version'
+			) || true
+		fi
+		your_user=your-user
+		[ "$user" != 'root' ] && your_user="$user"
+		echo
+		echo 'If you would like to use Docker as a non-root user, you should now consider'
+		echo 'adding your user to the "docker" group with something like:'
+		echo
+		echo '  sudo usermod -aG docker' $your_user
+		echo
+		echo 'Remember that you will have to log out and back in for this to take effect!'
+		echo
+		exit 0
+		;;
+
+	gentoo)
+		if [ "$url" = "https://test.docker.com/" ]; then
+			echo >&2
+			echo >&2 '  You appear to be trying to install the latest nightly build in Gentoo.'
+			echo >&2 '  The portage tree should contain the latest stable release of Docker, but'
+			echo >&2 '  if you want something more recent, you can always use the live ebuild'
+			echo >&2 '  provided in the "docker" overlay available via layman. For more'
+			echo >&2 '  instructions, please see the following URL:'
+			echo >&2 '    https://github.com/tianon/docker-overlay#using-this-overlay'
+			echo >&2 '  After adding the "docker" overlay, you should be able to:'
+			echo >&2 '    emerge -av =app-emulation/docker-9999'
+			echo >&2
+			exit 1
+		fi
+
+		(
+			set -x
+			$sh_c 'sleep 3; emerge app-emulation/docker'
+		)
+		exit 0
+		;;
+esac
+
+cat >&2 <<'EOF'
+
+  Either your platform is not easily detectable, is not supported by this
+  installer script (yet - PRs welcome! [hack/install.sh]), or does not yet have
+  a package for Docker. Please visit the following URL for more detailed
+  installation instructions:
+
+    https://docs.docker.com/en/latest/installation/
+
+EOF
+exit 1
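The `command_exists` helper near the top of that installer is just a quiet wrapper around `command -v`, which exits non-zero when the program is not found; everything else in the script branches on that exit status. In isolation:

```bash
# Exit status of `command -v` drives the helper; its output is discarded.
command_exists() {
	command -v "$@" > /dev/null 2>&1
}
command_exists sh && echo 'sh found'          # prints: sh found
command_exists no-such-tool || echo 'absent'  # prints: absent
```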
new file mode 100755
@@ -0,0 +1,242 @@
+#!/usr/bin/env bash
+set -e
+
+# This script builds various binary artifacts from a checkout of the docker
+# source code.
+#
+# Requirements:
+# - The current directory should be a checkout of the docker source code
+#   (http://github.com/docker/docker). Whatever version is checked out
+#   will be built.
+# - The VERSION file, at the root of the repository, should exist, and
+#   will be used as Docker binary version and package version.
+# - The hash of the git commit will also be included in the Docker binary,
+#   with the suffix -dirty if the repository isn't clean.
+# - The script is intended to be run inside the docker container specified
+#   in the Dockerfile at the root of the source. In other words:
+#   DO NOT CALL THIS SCRIPT DIRECTLY.
+# - The right way to call this script is to invoke "make" from
+#   your checkout of the Docker repository.
+#   the Makefile will do a "docker build -t docker ." and then
+#   "docker run hack/make.sh" in the resulting image.
+#
+
+set -o pipefail
+
+export DOCKER_PKG='github.com/docker/docker'
+
+# We're a nice, sexy, little shell script, and people might try to run us;
+# but really, they shouldn't. We want to be in a container!
+if [ "$(pwd)" != "/go/src/$DOCKER_PKG" ] || [ -z "$DOCKER_CROSSPLATFORMS" ]; then
+	{
+		echo "# WARNING! I don't seem to be running in the Docker container."
+		echo "# The result of this command might be an incorrect build, and will not be"
+		echo "# officially supported."
+		echo "#"
+		echo "# Try this instead: make all"
+		echo "#"
+	} >&2
+fi
+
+echo
+
+# List of bundles to create when no argument is passed
+DEFAULT_BUNDLES=(
+	validate-dco
+	validate-gofmt
+
+	binary
+
+	test-unit
+	test-integration
+	test-integration-cli
+
+	dynbinary
+	dyntest-unit
+	dyntest-integration
+
+	cover
+	cross
+	tgz
+	ubuntu
+)
+
+VERSION=$(cat ./VERSION)
+if command -v git &> /dev/null && git rev-parse &> /dev/null; then
+	GITCOMMIT=$(git rev-parse --short HEAD)
+	if [ -n "$(git status --porcelain --untracked-files=no)" ]; then
+		GITCOMMIT="$GITCOMMIT-dirty"
+	fi
+elif [ "$DOCKER_GITCOMMIT" ]; then
+	GITCOMMIT="$DOCKER_GITCOMMIT"
+else
+	echo >&2 'error: .git directory missing and DOCKER_GITCOMMIT not specified'
+	echo >&2 '  Please either build with the .git directory accessible, or specify the'
+	echo >&2 '  exact (--short) commit hash you are building using DOCKER_GITCOMMIT for'
+	echo >&2 '  future accountability in diagnosing build issues. Thanks!'
+	exit 1
+fi
+
+if [ "$AUTO_GOPATH" ]; then
+	rm -rf .gopath
+	mkdir -p .gopath/src/"$(dirname "${DOCKER_PKG}")"
+	ln -sf ../../../.. .gopath/src/"${DOCKER_PKG}"
+	export GOPATH="$(pwd)/.gopath:$(pwd)/vendor"
+fi
+
+if [ ! "$GOPATH" ]; then
+	echo >&2 'error: missing GOPATH; please see http://golang.org/doc/code.html#GOPATH'
+	echo >&2 '  alternatively, set AUTO_GOPATH=1'
+	exit 1
+fi
+
+if [ -z "$DOCKER_CLIENTONLY" ]; then
+	DOCKER_BUILDTAGS+=" daemon"
+fi
+
+# Use these flags when compiling the tests and final binary
+LDFLAGS='
+	-w
+	-X '$DOCKER_PKG'/dockerversion.GITCOMMIT "'$GITCOMMIT'"
+	-X '$DOCKER_PKG'/dockerversion.VERSION "'$VERSION'"
+'
+LDFLAGS_STATIC='-linkmode external'
+EXTLDFLAGS_STATIC='-static'
+# ORIG_BUILDFLAGS is necessary for the cross target which cannot always build
+# with options like -race.
+ORIG_BUILDFLAGS=( -a -tags "netgo static_build $DOCKER_BUILDTAGS" )
+BUILDFLAGS=( $BUILDFLAGS "${ORIG_BUILDFLAGS[@]}" )
+# Test timeout.
+: ${TIMEOUT:=30m}
+TESTFLAGS+=" -test.timeout=${TIMEOUT}"
+
+# A few more flags that are specific just to building a completely-static binary (see hack/make/binary)
+# PLEASE do not use these anywhere else.
+EXTLDFLAGS_STATIC_DOCKER="$EXTLDFLAGS_STATIC -lpthread -Wl,--unresolved-symbols=ignore-in-object-files"
+LDFLAGS_STATIC_DOCKER="
+	$LDFLAGS_STATIC
+	-X $DOCKER_PKG/dockerversion.IAMSTATIC true
+	-extldflags \"$EXTLDFLAGS_STATIC_DOCKER\"
+"
+
+if [ "$(uname -s)" = 'FreeBSD' ]; then
+	# Tell cgo the compiler is Clang, not GCC
+	# https://code.google.com/p/go/source/browse/src/cmd/cgo/gcc.go?spec=svne77e74371f2340ee08622ce602e9f7b15f29d8d3&r=e6794866ebeba2bf8818b9261b54e2eef1c9e588#752
+	export CC=clang
+
+	# "-extld clang" is a workaround for
+	# https://code.google.com/p/go/issues/detail?id=6845
+	LDFLAGS="$LDFLAGS -extld clang"
+fi
+
+# If sqlite3.h doesn't exist under /usr/include,
+# check /usr/local/include also just in case
+# (e.g. FreeBSD Ports installs it under the directory)
+if [ ! -e /usr/include/sqlite3.h ] && [ -e /usr/local/include/sqlite3.h ]; then
+	export CGO_CFLAGS='-I/usr/local/include'
+	export CGO_LDFLAGS='-L/usr/local/lib'
+fi
+
+HAVE_GO_TEST_COVER=
+if \
+	go help testflag | grep -- -cover > /dev/null \
+	&& go tool -n cover > /dev/null 2>&1 \
+; then
+	HAVE_GO_TEST_COVER=1
+fi
+
+# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'.
+# You can use this to select certain tests to run, eg.
+#
+#   TESTFLAGS='-run ^TestBuild$' ./hack/make.sh test
+#
+go_test_dir() {
+	dir=$1
+	coverpkg=$2
+	testcover=()
+	if [ "$HAVE_GO_TEST_COVER" ]; then
+		# if our current go install has -cover, we want to use it :)
+		mkdir -p "$DEST/coverprofiles"
+		coverprofile="docker${dir#.}"
+		coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
+		testcover=( -cover -coverprofile "$coverprofile" $coverpkg )
+	fi
+	(
+		export DEST
+		echo '+ go test' $TESTFLAGS "${DOCKER_PKG}${dir#.}"
+		cd "$dir"
+		go test ${testcover[@]} -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}" $TESTFLAGS
+	)
+}
+
+# This helper function walks the current directory looking for directories
+# holding certain files ($1 parameter), and prints their paths on standard
+# output, one per line.
+find_dirs() {
+	find . -not \( \
+		\( \
+			-wholename './vendor' \
+			-o -wholename './integration' \
+			-o -wholename './integration-cli' \
+			-o -wholename './contrib' \
+			-o -wholename './pkg/mflag/example' \
+			-o -wholename './.git' \
+			-o -wholename './bundles' \
+			-o -wholename './docs' \
+			-o -wholename './pkg/libcontainer/nsinit' \
+		\) \
+		-prune \
+	\) -name "$1" -print0 | xargs -0n1 dirname | sort -u
+}
+
+hash_files() {
+	while [ $# -gt 0 ]; do
+		f="$1"
+		shift
+		dir="$(dirname "$f")"
+		base="$(basename "$f")"
+		for hashAlgo in md5 sha256; do
+			if command -v "${hashAlgo}sum" &> /dev/null; then
+				(
+					# subshell and cd so that we get output files like:
+					#   $HASH docker-$VERSION
+					# instead of:
+					#   $HASH /go/src/github.com/.../$VERSION/binary/docker-$VERSION
+					cd "$dir"
+					"${hashAlgo}sum" "$base" > "$base.$hashAlgo"
+				)
+			fi
+		done
+	done
+}
+
+bundle() {
+	bundlescript=$1
+	bundle=$(basename $bundlescript)
+	echo "---> Making bundle: $bundle (in bundles/$VERSION/$bundle)"
+	mkdir -p bundles/$VERSION/$bundle
+	source $bundlescript $(pwd)/bundles/$VERSION/$bundle
+}
+
+main() {
+	# We want this to fail if the bundles already exist and cannot be removed.
+	# This is to avoid mixing bundles from different versions of the code.
+	mkdir -p bundles
+	if [ -e "bundles/$VERSION" ]; then
+		echo "bundles/$VERSION already exists. Removing."
+		rm -fr bundles/$VERSION && mkdir bundles/$VERSION || exit 1
+		echo
+	fi
+	SCRIPTDIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
+	if [ $# -lt 1 ]; then
+		bundles=(${DEFAULT_BUNDLES[@]})
+	else
+		bundles=($@)
+	fi
+	for bundle in ${bundles[@]}; do
+		bundle $SCRIPTDIR/make/$bundle
+		echo
+	done
+}
+
+main "$@"
| 0 | 242 |
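The exclusion list in `find_dirs` relies on `find`'s prune-then-match ordering: excluded directories are pruned before `-name` is ever tested, so nothing under them is visited. A minimal sketch of that pattern (using hypothetical temp paths, with a single exclusion instead of the full list):

```shell
# Prune-then-match: ./vendor is never descended into, so its test files
# are invisible to the -name match.
demo="$(mktemp -d)"
mkdir -p "$demo/pkg" "$demo/vendor"
touch "$demo/pkg/a_test.go" "$demo/vendor/b_test.go"
cd "$demo"
dirs="$(find . -not \( -wholename './vendor' -prune \) -name '*_test.go' -print0 | xargs -0n1 dirname | sort -u)"
echo "$dirs" # only ./pkg
```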
new file mode 100644
@@ -0,0 +1,10 @@
+#!/bin/bash
+
+if ! docker inspect busybox &> /dev/null; then
+	if [ -d /docker-busybox ]; then
+		source "$(dirname "$BASH_SOURCE")/.ensure-scratch"
+		( set -x; docker build -t busybox /docker-busybox )
+	else
+		( set -x; docker pull busybox )
+	fi
+fi
new file mode 100644
@@ -0,0 +1,21 @@
+#!/bin/bash
+
+if ! docker inspect scratch &> /dev/null; then
+	# let's build a "docker save" tarball for "scratch"
+	# see https://github.com/docker/docker/pull/5262
+	# and also https://github.com/docker/docker/issues/4242
+	mkdir -p /docker-scratch
+	(
+		cd /docker-scratch
+		echo '{"scratch":{"latest":"511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158"}}' > repositories
+		mkdir -p 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
+		(
+			cd 511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158
+			echo '{"id":"511136ea3c5a64f264b78b5433614aec563103b4d4702f3ba7d4d2698e22c158","comment":"Imported from -","created":"2013-06-13T14:03:50.821769-07:00","container_config":{"Hostname":"","Domainname":"","User":"","Memory":0,"MemorySwap":0,"CpuShares":0,"AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"PortSpecs":null,"ExposedPorts":null,"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":null,"Cmd":null,"Image":"","Volumes":null,"WorkingDir":"","Entrypoint":null,"NetworkDisabled":false,"OnBuild":null},"docker_version":"0.4.0","architecture":"x86_64","Size":0}' > json
+			echo '1.0' > VERSION
+			tar -cf layer.tar --files-from /dev/null
+		)
+	)
+	( set -x; tar -cf /docker-scratch.tar -C /docker-scratch . )
+	( set -x; docker load --input /docker-scratch.tar )
+fi
new file mode 100755
@@ -0,0 +1,26 @@
+#!/bin/bash
+set -e
+
+# Compile phase run by parallel in test-unit. No support for coverpkg
+
+dir=$1
+out_file="$DEST/precompiled/$dir.test"
+testcover=()
+if [ "$HAVE_GO_TEST_COVER" ]; then
+	# if our current go install has -cover, we want to use it :)
+	mkdir -p "$DEST/coverprofiles"
+	coverprofile="docker${dir#.}"
+	coverprofile="$DEST/coverprofiles/${coverprofile//\//-}"
+	testcover=( -cover -coverprofile "$coverprofile" ) # missing $coverpkg
+fi
+if [ "$BUILDFLAGS_FILE" ]; then
+	readarray -t BUILDFLAGS < "$BUILDFLAGS_FILE"
+fi
+(
+	cd "$dir"
+	go test "${testcover[@]}" -ldflags "$LDFLAGS" "${BUILDFLAGS[@]}" $TESTFLAGS -c
+)
+[ $? -ne 0 ] && return 1
+mkdir -p "$(dirname "$out_file")"
+mv "$dir/$(basename "$dir").test" "$out_file"
+echo "Precompiled: ${DOCKER_PKG}${dir#.}"
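The two-step coverprofile naming used here (and in `go_test_dir`) strips the leading `.` from the test directory, prefixes `docker`, then turns slashes into dashes so every profile can live flat in one `coverprofiles/` directory. A sketch with an example directory name:

```shell
dir=./pkg/mflag                       # example test directory
coverprofile="docker${dir#.}"         # "#." strips the leading dot -> docker/pkg/mflag
coverprofile="${coverprofile//\//-}"  # "//" replaces ALL slashes  -> docker-pkg-mflag
echo "$coverprofile"
```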
new file mode 100644
@@ -0,0 +1,33 @@
+#!/bin/bash
+
+if [ -z "$VALIDATE_UPSTREAM" ]; then
+	# this is kind of an expensive check, so let's not do this twice if we
+	# are running more than one validate bundlescript
+
+	VALIDATE_REPO='https://github.com/docker/docker.git'
+	VALIDATE_BRANCH='master'
+
+	if [ "$TRAVIS" = 'true' -a "$TRAVIS_PULL_REQUEST" != 'false' ]; then
+		VALIDATE_REPO="https://github.com/${TRAVIS_REPO_SLUG}.git"
+		VALIDATE_BRANCH="${TRAVIS_BRANCH}"
+	fi
+
+	VALIDATE_HEAD="$(git rev-parse --verify HEAD)"
+
+	git fetch -q "$VALIDATE_REPO" "refs/heads/$VALIDATE_BRANCH"
+	VALIDATE_UPSTREAM="$(git rev-parse --verify FETCH_HEAD)"
+
+	VALIDATE_COMMIT_LOG="$VALIDATE_UPSTREAM..$VALIDATE_HEAD"
+	VALIDATE_COMMIT_DIFF="$VALIDATE_UPSTREAM...$VALIDATE_HEAD"
+
+	validate_diff() {
+		if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
+			git diff "$VALIDATE_COMMIT_DIFF" "$@"
+		fi
+	}
+	validate_log() {
+		if [ "$VALIDATE_UPSTREAM" != "$VALIDATE_HEAD" ]; then
+			git log "$VALIDATE_COMMIT_LOG" "$@"
+		fi
+	}
+fi
new file mode 100644
@@ -0,0 +1,17 @@
+This directory holds scripts called by `make.sh` in the parent directory.
+
+Each script is named after the bundle it creates.
+They should not be called directly - instead, pass the bundle name as an argument to `make.sh`, for example:
+
+```
+./hack/make.sh test
+./hack/make.sh binary ubuntu
+
+# Or to run all bundles:
+./hack/make.sh
+```
+
+To add a bundle:
+
+* Create a shell-compatible file here
+* Add it to $DEFAULT_BUNDLES in make.sh
new file mode 100755
@@ -0,0 +1,17 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+
+go build \
+	-o "$DEST/docker-$VERSION" \
+	"${BUILDFLAGS[@]}" \
+	-ldflags "
+		$LDFLAGS
+		$LDFLAGS_STATIC_DOCKER
+	" \
+	./docker
+echo "Created binary: $DEST/docker-$VERSION"
+ln -sf "docker-$VERSION" "$DEST/docker"
+
+hash_files "$DEST/docker-$VERSION"
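The `hash_files` call above (the helper is defined in `make.sh`) leaves sidecar `.md5`/`.sha256` files next to the binary; because the helper `cd`s into the file's directory first, the digest line names the bare file rather than its full build path. A hypothetical illustration of the sha256 half of that behavior:

```shell
# Hash a file the way hash_files does: cd beside it first, so the
# checksum line reads "<digest>  docker-0.0", not an absolute path.
f="$(mktemp -d)/docker-0.0"   # stand-in for $DEST/docker-$VERSION
echo demo > "$f"
( cd "$(dirname "$f")" && sha256sum "$(basename "$f")" > "$(basename "$f").sha256" )
cat "$f.sha256"
```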
new file mode 100644
@@ -0,0 +1,22 @@
+#!/bin/bash
+set -e
+
+DEST="$1"
+
+bundle_cover() {
+	coverprofiles=( "$DEST/../"*"/coverprofiles/"* )
+	for p in "${coverprofiles[@]}"; do
+		echo
+		(
+			set -x
+			go tool cover -func="$p"
+		)
+	done
+}
+
+if [ "$HAVE_GO_TEST_COVER" ]; then
+	bundle_cover 2>&1 | tee "$DEST/report.log"
+else
+	echo >&2 'warning: the current version of go does not support -cover'
+	echo >&2 '  skipping test coverage report'
+fi
new file mode 100644
@@ -0,0 +1,33 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+
+# explicit list of os/arch combos that support being a daemon
+declare -A daemonSupporting
+daemonSupporting=(
+	[linux/amd64]=1
+)
+
+# if we have our linux/amd64 version compiled, let's symlink it in
+if [ -x "$DEST/../binary/docker-$VERSION" ]; then
+	mkdir -p "$DEST/linux/amd64"
+	(
+		cd "$DEST/linux/amd64"
+		ln -s ../../../binary/* ./
+	)
+	echo "Created symlinks:" "$DEST/linux/amd64/"*
+fi
+
+for platform in $DOCKER_CROSSPLATFORMS; do
+	(
+		mkdir -p "$DEST/$platform" # bundles/VERSION/cross/GOOS/GOARCH/docker-VERSION
+		export GOOS=${platform%/*}
+		export GOARCH=${platform##*/}
+		if [ -z "${daemonSupporting[$platform]}" ]; then
+			export LDFLAGS_STATIC_DOCKER="" # we just need a simple client for these platforms
+			export BUILDFLAGS=( "${ORIG_BUILDFLAGS[@]/ daemon/}" ) # remove the "daemon" build tag from platforms that aren't supported
+		fi
+		source "$(dirname "$BASH_SOURCE")/binary" "$DEST/$platform"
+	)
+done
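The `GOOS`/`GOARCH` split above uses bash parameter expansion instead of spawning `cut` or `awk`: `%/*` removes the shortest suffix starting at a slash, `##*/` removes the longest prefix ending at one. A sketch with a hypothetical cross-platform entry:

```shell
platform=linux/arm     # example entry from $DOCKER_CROSSPLATFORMS
GOOS=${platform%/*}    # strip "/arm" suffix -> linux
GOARCH=${platform##*/} # strip "linux/" prefix -> arm
echo "$GOOS $GOARCH"
```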
new file mode 100644
@@ -0,0 +1,45 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+
+if [ -z "$DOCKER_CLIENTONLY" ]; then
+	# dockerinit still needs to be a static binary, even if docker is dynamic
+	go build \
+		-o "$DEST/dockerinit-$VERSION" \
+		"${BUILDFLAGS[@]}" \
+		-ldflags "
+			$LDFLAGS
+			$LDFLAGS_STATIC
+			-extldflags \"$EXTLDFLAGS_STATIC\"
+		" \
+		./dockerinit
+	echo "Created binary: $DEST/dockerinit-$VERSION"
+	ln -sf "dockerinit-$VERSION" "$DEST/dockerinit"
+
+	hash_files "$DEST/dockerinit-$VERSION"
+
+	sha1sum=
+	if command -v sha1sum &> /dev/null; then
+		sha1sum=sha1sum
+	elif command -v shasum &> /dev/null; then
+		# Mac OS X - why couldn't they just use the same command name and be happy?
+		sha1sum=shasum
+	else
+		echo >&2 'error: cannot find sha1sum command or equivalent'
+		exit 1
+	fi
+
+	# sha1 our new dockerinit to ensure separate docker and dockerinit always run in a perfect pair compiled for one another
+	export DOCKER_INITSHA1="$($sha1sum $DEST/dockerinit-$VERSION | cut -d' ' -f1)"
+else
+	# DOCKER_CLIENTONLY must be truthy, so we don't need to bother with dockerinit :)
+	export DOCKER_INITSHA1=""
+fi
+# exported so that "dyntest" can easily access it later without recalculating it
+
+(
+	export LDFLAGS_STATIC_DOCKER="-X $DOCKER_PKG/dockerversion.INITSHA1 \"$DOCKER_INITSHA1\" -X $DOCKER_PKG/dockerversion.INITPATH \"$DOCKER_INITPATH\""
+	export BUILDFLAGS=( "${BUILDFLAGS[@]/netgo /}" ) # disable netgo, since we don't need it for a dynamic binary
+	source "$(dirname "$BASH_SOURCE")/binary"
+)
new file mode 100644
@@ -0,0 +1,18 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+INIT=$DEST/../dynbinary/dockerinit-$VERSION
+
+if [ ! -x "$INIT" ]; then
+	echo >&2 'error: dynbinary must be run before dyntest-integration'
+	false
+fi
+
+(
+	export TEST_DOCKERINIT_PATH="$INIT"
+	export LDFLAGS_STATIC_DOCKER="
+		-X $DOCKER_PKG/dockerversion.INITSHA1 \"$DOCKER_INITSHA1\"
+	"
+	source "$(dirname "$BASH_SOURCE")/test-integration"
+)
new file mode 100644
@@ -0,0 +1,18 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+INIT=$DEST/../dynbinary/dockerinit-$VERSION
+
+if [ ! -x "$INIT" ]; then
+	echo >&2 'error: dynbinary must be run before dyntest-unit'
+	false
+fi
+
+(
+	export TEST_DOCKERINIT_PATH="$INIT"
+	export LDFLAGS_STATIC_DOCKER="
+		-X $DOCKER_PKG/dockerversion.INITSHA1 \"$DOCKER_INITSHA1\"
+	"
+	source "$(dirname "$BASH_SOURCE")/test-unit"
+)
new file mode 100644
@@ -0,0 +1,15 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+
+bundle_test_integration() {
+	LDFLAGS="$LDFLAGS $LDFLAGS_STATIC_DOCKER" go_test_dir ./integration \
+		"-coverpkg $(find_dirs '*.go' | sed 's,^\.,'$DOCKER_PKG',g' | paste -d, -s)"
+}
+
+# this "grep" hides some really irritating warnings that "go test -coverpkg"
+# spews when it is given packages that aren't used
+exec > >(tee -a $DEST/test.log) 2>&1
+bundle_test_integration 2>&1 \
+	| grep --line-buffered -v '^warning: no packages being tested depend on '
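The `-coverpkg` argument above is built by rewriting each relative directory from `find_dirs` into a full import path and joining the lines with commas. The `sed`/`paste` pipeline can be sketched in isolation with two made-up directories:

```shell
# Turn "./a" and "./b/c" into a comma-joined import-path list, as the
# bundle_test_integration helper does for -coverpkg.
DOCKER_PKG=github.com/docker/docker
pkgs="$(printf './a\n./b/c\n' | sed 's,^\.,'$DOCKER_PKG',g' | paste -d, -s)"
echo "$pkgs" # github.com/docker/docker/a,github.com/docker/docker/b/c
```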
new file mode 100644
@@ -0,0 +1,46 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+
+DOCKER_GRAPHDRIVER=${DOCKER_GRAPHDRIVER:-vfs}
+DOCKER_EXECDRIVER=${DOCKER_EXECDRIVER:-native}
+
+bundle_test_integration_cli() {
+	go_test_dir ./integration-cli
+}
+
+# subshell so that we can export PATH without breaking other things
+exec > >(tee -a $DEST/test.log) 2>&1
+(
+	export PATH="$DEST/../binary:$DEST/../dynbinary:$PATH"
+
+	if ! command -v docker &> /dev/null; then
+		echo >&2 'error: binary or dynbinary must be run before test-integration-cli'
+		false
+	fi
+
+	# intentionally open a couple bogus file descriptors to help test that they get scrubbed in containers
+	exec 41>&1 42>&2
+
+	( set -x; exec \
+		docker --daemon --debug \
+			--storage-driver "$DOCKER_GRAPHDRIVER" \
+			--exec-driver "$DOCKER_EXECDRIVER" \
+			--pidfile "$DEST/docker.pid" \
+			&> "$DEST/docker.log"
+	) &
+
+	# pull the busybox image before running the tests
+	sleep 2
+
+	source "$(dirname "$BASH_SOURCE")/.ensure-busybox"
+
+	bundle_test_integration_cli
+
+	for pid in $(find "$DEST" -name docker.pid); do
+		DOCKER_PID=$(set -x; cat "$pid")
+		( set -x; kill $DOCKER_PID )
+		wait $DOCKER_PID || true
+	done
+)
new file mode 100644
@@ -0,0 +1,86 @@
+#!/bin/bash
+set -e
+
+DEST=$1
+: ${PARALLEL_JOBS:=$(nproc)}
+
+RED=$'\033[31m'
+GREEN=$'\033[32m'
+TEXTRESET=$'\033[0m' # reset the foreground colour
+
+# Run Docker's test suite, including sub-packages, and store their output as a bundle
+# If $TESTFLAGS is set in the environment, it is passed as extra arguments to 'go test'.
+# You can use this to select certain tests to run, eg.
+#
+#   TESTFLAGS='-run ^TestBuild$' ./hack/make.sh test-unit
+#
+bundle_test_unit() {
+	{
+		date
+
+		# Run all the tests if no TESTDIRS were specified.
+		if [ -z "$TESTDIRS" ]; then
+			TESTDIRS=$(find_dirs '*_test.go')
+		fi
+		(
+			export LDFLAGS="$LDFLAGS $LDFLAGS_STATIC_DOCKER"
+			export TESTFLAGS
+			export HAVE_GO_TEST_COVER
+			export DEST
+			if command -v parallel &> /dev/null; then
+				# accommodate parallel to be able to access variables
+				export SHELL="$BASH"
+				export HOME="$(mktemp -d)"
+				mkdir -p "$HOME/.parallel"
+				touch "$HOME/.parallel/ignored_vars"
+
+				# some hack to export array variables
+				export BUILDFLAGS_FILE="$HOME/buildflags_file"
+				( IFS=$'\n'; echo "${BUILDFLAGS[*]}" ) > "$BUILDFLAGS_FILE"
+
+				echo "$TESTDIRS" | parallel --jobs "$PARALLEL_JOBS" --halt 2 --env _ "$(dirname "$BASH_SOURCE")/.go-compile-test-dir"
+				rm -rf "$HOME"
+			else
+				# aww, no "parallel" available - fall back to boring
+				for test_dir in $TESTDIRS; do
+					"$(dirname "$BASH_SOURCE")/.go-compile-test-dir" "$test_dir"
+				done
+			fi
+		)
+		echo "$TESTDIRS" | go_run_test_dir
+	}
+}
+
+go_run_test_dir() {
+	TESTS_FAILED=()
+	while read dir; do
+		echo
+		echo '+ go test' $TESTFLAGS "${DOCKER_PKG}${dir#.}"
+		precompiled="$DEST/precompiled/$dir.test"
+		if ! ( cd "$dir" && "$precompiled" $TESTFLAGS ); then
+			TESTS_FAILED+=("$dir")
+			echo
+			echo "${RED}Tests failed: $dir${TEXTRESET}"
+			sleep 1 # give it a second, so observers watching can take note
+		fi
+	done
+
+	echo
+	echo
+	echo
+
+	# if some tests fail, we want the bundlescript to fail, but we want to
+	# try running ALL the tests first, hence TESTS_FAILED
+	if [ "${#TESTS_FAILED[@]}" -gt 0 ]; then
+		echo "${RED}Test failures in: ${TESTS_FAILED[@]}${TEXTRESET}"
+		echo
+		false
+	else
+		echo "${GREEN}Test success${TEXTRESET}"
+		echo
+		true
+	fi
+}
+
+exec > >(tee -a $DEST/test.log) 2>&1
+bundle_test_unit
new file mode 100644
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+DEST="$1"
+CROSS="$DEST/../cross"
+
+set -e
+
+if [ ! -d "$CROSS/linux/amd64" ]; then
+	echo >&2 'error: binary and cross must be run before tgz'
+	false
+fi
+
+for d in "$CROSS/"*/*; do
+	GOARCH="$(basename "$d")"
+	GOOS="$(basename "$(dirname "$d")")"
+	mkdir -p "$DEST/$GOOS/$GOARCH"
+	TGZ="$DEST/$GOOS/$GOARCH/docker-$VERSION.tgz"
+
+	mkdir -p "$DEST/build"
+
+	mkdir -p "$DEST/build/usr/local/bin"
+	cp -L "$d/docker-$VERSION" "$DEST/build/usr/local/bin/docker"
+
+	tar --numeric-owner --owner 0 -C "$DEST/build" -czf "$TGZ" usr
+
+	hash_files "$TGZ"
+
+	rm -rf "$DEST/build"
+
+	echo "Created tgz: $TGZ"
+done
new file mode 100644
@@ -0,0 +1,176 @@
+#!/bin/bash
+
+DEST=$1
+
+PKGVERSION="$VERSION"
+if [ -n "$(git status --porcelain)" ]; then
+	PKGVERSION="$PKGVERSION-$(date +%Y%m%d%H%M%S)-$GITCOMMIT"
+fi
+
+PACKAGE_ARCHITECTURE="$(dpkg-architecture -qDEB_HOST_ARCH)"
+PACKAGE_URL="http://www.docker.com/"
+PACKAGE_MAINTAINER="support@docker.com"
+PACKAGE_DESCRIPTION="Linux container runtime
+Docker complements LXC with a high-level API which operates at the process
+level. It runs unix processes with strong guarantees of isolation and
+repeatability across servers.
+Docker is a great building block for automating distributed systems:
+large-scale web deployments, database clusters, continuous deployment systems,
+private PaaS, service-oriented architectures, etc."
+PACKAGE_LICENSE="Apache-2.0"
+
+# Build docker as an ubuntu package using FPM and REPREPRO (sue me).
+# bundle_binary must be called first.
+bundle_ubuntu() {
+	DIR=$DEST/build
+
+	# Include our udev rules
+	mkdir -p $DIR/etc/udev/rules.d
+	cp contrib/udev/80-docker.rules $DIR/etc/udev/rules.d/
+
+	# Include our init scripts
+	mkdir -p $DIR/etc/init
+	cp contrib/init/upstart/docker.conf $DIR/etc/init/
+	mkdir -p $DIR/etc/init.d
+	cp contrib/init/sysvinit-debian/docker $DIR/etc/init.d/
+	mkdir -p $DIR/etc/default
+	cp contrib/init/sysvinit-debian/docker.default $DIR/etc/default/docker
+	mkdir -p $DIR/lib/systemd/system
+	cp contrib/init/systemd/docker.{service,socket} $DIR/lib/systemd/system/
+
+	# Include contributed completions
+	mkdir -p $DIR/etc/bash_completion.d
+	cp contrib/completion/bash/docker $DIR/etc/bash_completion.d/
+	mkdir -p $DIR/usr/share/zsh/vendor-completions
+	cp contrib/completion/zsh/_docker $DIR/usr/share/zsh/vendor-completions/
+	mkdir -p $DIR/etc/fish/completions
+	cp contrib/completion/fish/docker.fish $DIR/etc/fish/completions/
+
+	# Include contributed man pages
+	docs/man/md2man-all.sh -q
+	manRoot="$DIR/usr/share/man"
+	mkdir -p "$manRoot"
+	for manDir in docs/man/man?; do
+		manBase="$(basename "$manDir")" # "man1"
+		for manFile in "$manDir"/*; do
+			manName="$(basename "$manFile")" # "docker-build.1"
+			mkdir -p "$manRoot/$manBase"
+			gzip -c "$manFile" > "$manRoot/$manBase/$manName.gz"
+		done
+	done
+
+	# Copy the binary
+	# This will fail if the binary bundle hasn't been built
+	mkdir -p $DIR/usr/bin
+	cp $DEST/../binary/docker-$VERSION $DIR/usr/bin/docker
+
+	# Generate postinst/prerm/postrm scripts
+	cat > $DEST/postinst <<'EOF'
+#!/bin/sh
+set -e
+set -u
+
+if [ "$1" = 'configure' ] && [ -z "$2" ]; then
+	if ! getent group docker > /dev/null; then
+		groupadd --system docker
+	fi
+fi
+
+if ! { [ -x /sbin/initctl ] && /sbin/initctl version 2>/dev/null | grep -q upstart; }; then
+	# we only need to do this if upstart isn't in charge
+	update-rc.d docker defaults > /dev/null || true
+fi
+if [ -n "$2" ]; then
+	_dh_action=restart
+else
+	_dh_action=start
+fi
+service docker $_dh_action 2>/dev/null || true
+
+#DEBHELPER#
+EOF
+	cat > $DEST/prerm <<'EOF'
+#!/bin/sh
+set -e
+set -u
+
+service docker stop 2>/dev/null || true
+
+#DEBHELPER#
+EOF
+	cat > $DEST/postrm <<'EOF'
+#!/bin/sh
+set -e
+set -u
+
+if [ "$1" = "purge" ] ; then
+	update-rc.d docker remove > /dev/null || true
+fi
+
+# In case this system is running systemd, we make systemd reload the unit files
+# to pick up changes.
+if [ -d /run/systemd/system ] ; then
+	systemctl --system daemon-reload > /dev/null || true
+fi
+
+#DEBHELPER#
+EOF
+	# TODO swaths of these were borrowed from debhelper's auto-inserted stuff, because we're still using fpm - we need to use debhelper instead, and somehow reconcile Ubuntu that way
+	chmod +x $DEST/postinst $DEST/prerm $DEST/postrm
+
+	(
+		# switch directories so we create *.deb in the right folder
+		cd $DEST
+
+		# create lxc-docker-VERSION package
+		fpm -s dir -C $DIR \
+			--name lxc-docker-$VERSION --version $PKGVERSION \
+			--after-install $DEST/postinst \
+			--before-remove $DEST/prerm \
+			--after-remove $DEST/postrm \
+			--architecture "$PACKAGE_ARCHITECTURE" \
+			--prefix / \
+			--depends iptables \
+			--deb-recommends aufs-tools \
+			--deb-recommends ca-certificates \
+			--deb-recommends git \
+			--deb-recommends xz-utils \
+			--deb-recommends 'cgroupfs-mount | cgroup-lite' \
+			--description "$PACKAGE_DESCRIPTION" \
+			--maintainer "$PACKAGE_MAINTAINER" \
+			--conflicts docker \
+			--conflicts docker.io \
+			--conflicts lxc-docker-virtual-package \
+			--provides lxc-docker \
+			--provides lxc-docker-virtual-package \
+			--replaces lxc-docker \
+			--replaces lxc-docker-virtual-package \
+			--url "$PACKAGE_URL" \
+			--license "$PACKAGE_LICENSE" \
+			--config-files /etc/udev/rules.d/80-docker.rules \
+			--config-files /etc/init/docker.conf \
+			--config-files /etc/init.d/docker \
+			--config-files /etc/default/docker \
+			--deb-compression gz \
+			-t deb .
+		# TODO replace "Suggests: cgroup-lite" with "Recommends: cgroupfs-mount | cgroup-lite" once cgroupfs-mount is available
+
+		# create empty lxc-docker wrapper package
+		fpm -s empty \
+			--name lxc-docker --version $PKGVERSION \
+			--architecture "$PACKAGE_ARCHITECTURE" \
+			--depends lxc-docker-$VERSION \
+			--description "$PACKAGE_DESCRIPTION" \
+			--maintainer "$PACKAGE_MAINTAINER" \
+			--url "$PACKAGE_URL" \
+			--license "$PACKAGE_LICENSE" \
+			--deb-compression gz \
+			-t deb
+	)
+
+	# clean up after ourselves so we have a clean output directory
+	rm $DEST/postinst $DEST/prerm $DEST/postrm
+	rm -r $DIR
+}
+
+bundle_ubuntu
new file mode 100644
@@ -0,0 +1,56 @@
+#!/bin/bash
+
+source "$(dirname "$BASH_SOURCE")/.validate"
+
+adds=$(validate_diff --numstat | awk '{ s += $1 } END { print s }')
+dels=$(validate_diff --numstat | awk '{ s += $2 } END { print s }')
+notDocs="$(validate_diff --numstat | awk '$3 !~ /^docs\// { print $3 }')"
+
+: ${adds:=0}
+: ${dels:=0}
+
+# "Username may only contain alphanumeric characters or dashes and cannot begin with a dash"
+githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'
+
+# https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work
+dcoPrefix='Signed-off-by:'
+dcoRegex="^(Docker-DCO-1.1-)?$dcoPrefix ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"
+
+check_dco() {
+	grep -qE "$dcoRegex"
+}
+
+if [ $adds -eq 0 -a $dels -eq 0 ]; then
+	echo '0 adds, 0 deletions; nothing to validate! :)'
+elif [ -z "$notDocs" -a $adds -le 1 -a $dels -le 1 ]; then
+	echo 'Congratulations! DCO small-patch-exception material!'
+else
+	commits=( $(validate_log --format='format:%H%n') )
+	badCommits=()
+	for commit in "${commits[@]}"; do
+		if [ -z "$(git log -1 --format='format:' --name-status "$commit")" ]; then
+			# no content (ie, Merge commit, etc)
+			continue
+		fi
+		if ! git log -1 --format='format:%B' "$commit" | check_dco; then
+			badCommits+=( "$commit" )
+		fi
+	done
+	if [ ${#badCommits[@]} -eq 0 ]; then
+		echo "Congratulations! All commits are properly signed with the DCO!"
+	else
+		{
+			echo "These commits do not have a proper '$dcoPrefix' marker:"
+			for commit in "${badCommits[@]}"; do
+				echo " - $commit"
+			done
+			echo
+			echo 'Please amend each commit to include a properly formatted DCO marker.'
+			echo
+			echo 'Visit the following URL for information about the Docker DCO:'
+			echo ' https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work'
+			echo
+		} >&2
+		false
+	fi
+fi
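The DCO regex accepts an optional `Docker-DCO-1.1-` prefix, a name, an email in angle brackets, and an optional `(github: username)` suffix. A sketch (with a made-up name and email) of a line it accepts and one it rejects:

```shell
# Same pattern as validate-dco's $dcoRegex, tried against two sample lines.
githubUsernameRegex='[a-zA-Z0-9][a-zA-Z0-9-]+'
dcoRegex="^(Docker-DCO-1.1-)?Signed-off-by: ([^<]+) <([^<>@]+@[^<>]+)>( \\(github: ($githubUsernameRegex)\\))?$"
good='Signed-off-by: Jane Doe <jane@example.com> (github: janedoe)'
bad='Signed off by Jane Doe'
echo "$good" | grep -qE "$dcoRegex" && echo 'good line matches'
echo "$bad" | grep -qE "$dcoRegex" || echo 'bad line rejected'
```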
new file mode 100644
@@ -0,0 +1,30 @@
+#!/bin/bash
+
+source "$(dirname "$BASH_SOURCE")/.validate"
+
+IFS=$'\n'
+files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
+unset IFS
+
+badFiles=()
+for f in "${files[@]}"; do
+	# we use "git show" here to validate that what's committed is formatted
+	if [ "$(git show "$VALIDATE_HEAD:$f" | gofmt -s -l)" ]; then
+		badFiles+=( "$f" )
+	fi
+done
+
+if [ ${#badFiles[@]} -eq 0 ]; then
+	echo 'Congratulations! All Go source files are properly formatted.'
+else
+	{
+		echo "These files are not properly gofmt'd:"
+		for f in "${badFiles[@]}"; do
+			echo " - $f"
+		done
+		echo
+		echo 'Please reformat the above files using "gofmt -s -w" and commit the result.'
+		echo
+	} >&2
+	false
+fi
new file mode 100755 |
| ... | ... |
@@ -0,0 +1,389 @@ |
+#!/usr/bin/env bash
+set -e
+
+# This script looks for bundles built by make.sh, and releases them on a
+# public S3 bucket.
+#
+# Bundles should be available for the VERSION string passed as argument.
+#
+# The correct way to call this script is inside a container built by the
+# official Dockerfile at the root of the Docker source code. The Dockerfile,
+# make.sh and release.sh should all be from the same source code revision.
+
+set -o pipefail
+
+# Print a usage message and exit.
+usage() {
+	cat >&2 <<'EOF'
+To run, I need:
+- to be in a container generated by the Dockerfile at the top of the Docker
+  repository;
+- to be provided with the name of an S3 bucket, in environment variable
+  AWS_S3_BUCKET;
+- to be provided with AWS credentials for this S3 bucket, in environment
+  variables AWS_ACCESS_KEY and AWS_SECRET_KEY;
+- the passphrase to unlock the GPG key which will sign the deb packages
+  (passed as environment variable GPG_PASSPHRASE);
+- a generous amount of good will and nice manners.
+The canonical way to run me is to run the image produced by the Dockerfile, e.g.:
+
+docker run -e AWS_S3_BUCKET=test.docker.com \
+	-e AWS_ACCESS_KEY=... \
+	-e AWS_SECRET_KEY=... \
+	-e GPG_PASSPHRASE=... \
+	-i -t --privileged \
+	docker ./hack/release.sh
+EOF
+	exit 1
+}
+
+[ "$AWS_S3_BUCKET" ] || usage
+[ "$AWS_ACCESS_KEY" ] || usage
+[ "$AWS_SECRET_KEY" ] || usage
+[ "$GPG_PASSPHRASE" ] || usage
+[ -d /go/src/github.com/docker/docker ] || usage
+cd /go/src/github.com/docker/docker
+[ -x hack/make.sh ] || usage
+
+RELEASE_BUNDLES=(
+	binary
+	cross
+	tgz
+	ubuntu
+)
+
+if [ "$1" != '--release-regardless-of-test-failure' ]; then
+	RELEASE_BUNDLES=(
+		test-unit test-integration
+		"${RELEASE_BUNDLES[@]}"
+		test-integration-cli
+	)
+fi
+
+VERSION=$(cat VERSION)
+BUCKET=$AWS_S3_BUCKET
+
+# These are the 2 keys we've used to sign the debs
+# release (get.docker.com)
+# GPG_KEY="36A1D7869245C8950F966E92D8576A8BA88D21E9"
+# test (test.docker.com)
+# GPG_KEY="740B314AE3941731B942C66ADF4FD13717AAD7D6"
+
+setup_s3() {
+	# Try creating the bucket. Ignore errors (it might already exist).
+	s3cmd mb s3://$BUCKET 2>/dev/null || true
+	# Check access to the bucket.
+	# s3cmd has no useful exit status, so we cannot check that.
+	# Instead, we check if it outputs anything on standard output.
+	# (When there are problems, it uses standard error instead.)
+	s3cmd info s3://$BUCKET | grep -q .
+	# Make the bucket accessible through website endpoints.
+	s3cmd ws-create --ws-index index --ws-error error s3://$BUCKET
+}
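Since s3cmd's exit status is unreliable, `setup_s3` treats "printed something on stdout" as the success signal via `grep -q .`. A small sketch of that trick with a made-up stand-in command (`flaky_tool` is ours, not part of s3cmd):

```shell
#!/usr/bin/env bash
# "Check stdout, not exit status": grep -q . succeeds only when the
# pipeline emitted at least one character. flaky_tool is a hypothetical
# stand-in for s3cmd, always exiting 0 like s3cmd often does.
flaky_tool() {
	if [ "$1" = ok ]; then
		echo "Bucket info: s3://example"     # success: talks on stdout
	else
		echo "ERROR: access denied" >&2      # failure: only stderr
	fi
	return 0
}

flaky_tool ok  2>/dev/null | grep -q . && echo "bucket reachable"
flaky_tool bad 2>/dev/null | grep -q . || echo "bucket check failed"
```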
+
+# write_to_s3 uploads the contents of standard input to the specified S3 url.
+write_to_s3() {
+	DEST=$1
+	F=$(mktemp)
+	cat > $F
+	s3cmd --acl-public --mime-type='text/plain' put $F $DEST
+	rm -f $F
+}
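The pattern in `write_to_s3` — spool stdin into a temp file so a tool that only accepts file arguments can consume it — can be sketched locally; `fake_put` below is a hypothetical stand-in for `s3cmd put`:

```shell
#!/usr/bin/env bash
# Sketch of the write_to_s3 pattern: capture piped-in content into a
# mktemp file, hand it to a file-only tool, then clean up.
# fake_put is a hypothetical stand-in for "s3cmd put".
set -e

fake_put() { cp "$1" "$2"; }

write_to_dest() {
	DEST=$1
	F=$(mktemp)
	cat > "$F"                 # capture whatever was piped in
	fake_put "$F" "$DEST"
	rm -f "$F"                 # always remove the spool file
}

echo 'hello, index' | write_to_dest /tmp/fake-index
cat /tmp/fake-index            # hello, index
```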
+
+s3_url() {
+	case "$BUCKET" in
+		get.docker.com|test.docker.com)
+			echo "https://$BUCKET"
+			;;
+		*)
+			s3cmd ws-info s3://$BUCKET | awk -v 'FS=: +' '/http:\/\/'$BUCKET'/ { gsub(/\/+$/, "", $2); print $2 }'
+			;;
+	esac
+}
+
+build_all() {
+	if ! ./hack/make.sh "${RELEASE_BUNDLES[@]}"; then
+		echo >&2
+		echo >&2 'The build or tests appear to have failed.'
+		echo >&2
+		echo >&2 'You, as the release maintainer, now have a couple options:'
+		echo >&2 '- delay release and fix issues'
+		echo >&2 '- delay release and fix issues'
+		echo >&2 '- did we mention how important this is? issues need fixing :)'
+		echo >&2
+		echo >&2 'As a final LAST RESORT, you (because only you, the release maintainer,'
+		echo >&2 ' really know all the hairy problems at hand with the current release'
+		echo >&2 ' issues) may bypass this checking by running this script again with the'
+		echo >&2 ' single argument of "--release-regardless-of-test-failure", which will skip'
+		echo >&2 ' running the test suite, and will only build the binaries and packages. Please'
+		echo >&2 ' avoid using this if at all possible.'
+		echo >&2
+		echo >&2 'Regardless, we cannot stress enough the scarcity with which this bypass'
+		echo >&2 ' should be used. If there are release issues, we should always err on the'
+		echo >&2 ' side of caution.'
+		echo >&2
+		exit 1
+	fi
+}
+
+upload_release_build() {
+	src="$1"
+	dst="$2"
+	latest="$3"
+
+	echo
+	echo "Uploading $src"
+	echo " to $dst"
+	echo
+	s3cmd --follow-symlinks --preserve --acl-public put "$src" "$dst"
+	if [ "$latest" ]; then
+		echo
+		echo "Copying to $latest"
+		echo
+		s3cmd --acl-public cp "$dst" "$latest"
+	fi
+
+	# get hash files too (see hash_files() in hack/make.sh)
+	for hashAlgo in md5 sha256; do
+		if [ -e "$src.$hashAlgo" ]; then
+			echo
+			echo "Uploading $src.$hashAlgo"
+			echo " to $dst.$hashAlgo"
+			echo
+			s3cmd --follow-symlinks --preserve --acl-public --mime-type='text/plain' put "$src.$hashAlgo" "$dst.$hashAlgo"
+			if [ "$latest" ]; then
+				echo
+				echo "Copying to $latest.$hashAlgo"
+				echo
+				s3cmd --acl-public cp "$dst.$hashAlgo" "$latest.$hashAlgo"
+			fi
+		fi
+	done
+}
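The loop above relies on the sidecar convention from `hash_files()` in hack/make.sh: each artifact may be accompanied by `.md5` and `.sha256` files that get uploaded alongside it. A sketch of producing and verifying such sidecars (the file name is hypothetical; assumes GNU coreutils `md5sum`/`sha256sum`):

```shell
#!/usr/bin/env bash
# Sketch of the "$src.$hashAlgo" sidecar convention consumed above:
# hash_files writes .md5/.sha256 files next to each artifact.
set -e
cd "$(mktemp -d)"
echo 'pretend docker binary' > docker-1.3.0

for hashAlgo in md5 sha256; do
	# md5sum/sha256sum emit "HASH  FILENAME", the format -c expects
	"${hashAlgo}sum" docker-1.3.0 > "docker-1.3.0.$hashAlgo"
done

ls docker-1.3.0*                  # binary plus its two hash sidecars
sha256sum -c docker-1.3.0.sha256  # prints "docker-1.3.0: OK"
```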
+
+release_build() {
+	GOOS=$1
+	GOARCH=$2
+
+	binDir=bundles/$VERSION/cross/$GOOS/$GOARCH
+	tgzDir=bundles/$VERSION/tgz/$GOOS/$GOARCH
+	binary=docker-$VERSION
+	tgz=docker-$VERSION.tgz
+
+	latestBase=
+	if [ -z "$NOLATEST" ]; then
+		latestBase=docker-latest
+	fi
+
+	# we need to map our GOOS and GOARCH to uname values
+	# see https://en.wikipedia.org/wiki/Uname
+	# ie, GOOS=linux -> "uname -s"=Linux
+
+	s3Os=$GOOS
+	case "$s3Os" in
+		darwin)
+			s3Os=Darwin
+			;;
+		freebsd)
+			s3Os=FreeBSD
+			;;
+		linux)
+			s3Os=Linux
+			;;
+		*)
+			echo >&2 "error: can't convert $s3Os to an appropriate value for 'uname -s'"
+			exit 1
+			;;
+	esac
+
+	s3Arch=$GOARCH
+	case "$s3Arch" in
+		amd64)
+			s3Arch=x86_64
+			;;
+		386)
+			s3Arch=i386
+			;;
+		arm)
+			s3Arch=armel
+			# someday, we might potentially support multiple GOARM values, in which case we might get armhf here too
+			;;
+		*)
+			echo >&2 "error: can't convert $s3Arch to an appropriate value for 'uname -m'"
+			exit 1
+			;;
+	esac
+
+	s3Dir=s3://$BUCKET/builds/$s3Os/$s3Arch
+	latest=
+	latestTgz=
+	if [ "$latestBase" ]; then
+		latest="$s3Dir/$latestBase"
+		latestTgz="$s3Dir/$latestBase.tgz"
+	fi
+
+	if [ ! -x "$binDir/$binary" ]; then
+		echo >&2 "error: can't find $binDir/$binary - was it compiled properly?"
+		exit 1
+	fi
+	if [ ! -f "$tgzDir/$tgz" ]; then
+		echo >&2 "error: can't find $tgzDir/$tgz - was it packaged properly?"
+		exit 1
+	fi
+
+	upload_release_build "$binDir/$binary" "$s3Dir/$binary" "$latest"
+	upload_release_build "$tgzDir/$tgz" "$s3Dir/$tgz" "$latestTgz"
+}
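The two case statements translate Go's GOOS/GOARCH naming into what `uname -s`/`uname -m` report, so download URLs can be built on the target machine from uname output. A sketch of the arch half, factored into a function for illustration (`uname_arch` is our name, not the script's):

```shell
#!/usr/bin/env bash
# Sketch of the GOARCH -> "uname -m" mapping above (the release script
# inlines this as a case statement on s3Arch).
set -e

uname_arch() {
	case "$1" in
		amd64) echo x86_64 ;;
		386)   echo i386 ;;
		arm)   echo armel ;;
		*)     echo >&2 "error: unknown GOARCH $1"; return 1 ;;
	esac
}

uname_arch amd64   # x86_64
uname_arch arm     # armel
```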
+
+# Upload the 'ubuntu' bundle to S3:
+# 1. A full APT repository is published at $BUCKET/ubuntu/
+# 2. Instructions for using the APT repository are uploaded at $BUCKET/ubuntu/index
+release_ubuntu() {
+	[ -e bundles/$VERSION/ubuntu ] || {
+		echo >&2 './hack/make.sh must be run before release_ubuntu'
+		exit 1
+	}
+
+	# Sign our packages
+	dpkg-sig -g "--passphrase $GPG_PASSPHRASE" -k releasedocker \
+		--sign builder bundles/$VERSION/ubuntu/*.deb
+
+	# Setup the APT repo
+	APTDIR=bundles/$VERSION/ubuntu/apt
+	mkdir -p $APTDIR/conf $APTDIR/db
+	s3cmd sync s3://$BUCKET/ubuntu/db/ $APTDIR/db/ || true
+	cat > $APTDIR/conf/distributions <<EOF
+Codename: docker
+Components: main
+Architectures: amd64 i386
+EOF
+
+	# Add the DEB package to the APT repo
+	DEBFILE=bundles/$VERSION/ubuntu/lxc-docker*.deb
+	reprepro -b $APTDIR includedeb docker $DEBFILE
+
+	# Sign
+	for F in $(find $APTDIR -name Release); do
+		gpg -u releasedocker --passphrase $GPG_PASSPHRASE \
+			--armor --sign --detach-sign \
+			--output $F.gpg $F
+	done
+
+	# Upload keys
+	s3cmd sync $HOME/.gnupg/ s3://$BUCKET/ubuntu/.gnupg/
+	gpg --armor --export releasedocker > bundles/$VERSION/ubuntu/gpg
+	s3cmd --acl-public put bundles/$VERSION/ubuntu/gpg s3://$BUCKET/gpg
+
+	local gpgFingerprint=36A1D7869245C8950F966E92D8576A8BA88D21E9
+	if [[ $BUCKET == test* ]]; then
+		gpgFingerprint=740B314AE3941731B942C66ADF4FD13717AAD7D6
+	fi
+
+	# Upload repo
+	s3cmd --acl-public sync $APTDIR/ s3://$BUCKET/ubuntu/
+	cat <<EOF | write_to_s3 s3://$BUCKET/ubuntu/index
+# Check that HTTPS transport is available to APT
+if [ ! -e /usr/lib/apt/methods/https ]; then
+	apt-get update
+	apt-get install -y apt-transport-https
+fi
+
+# Add the repository to your APT sources
+echo deb $(s3_url)/ubuntu docker main > /etc/apt/sources.list.d/docker.list
+
+# Then import the repository key
+apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys $gpgFingerprint
+
+# Install docker
+apt-get update
+apt-get install -y lxc-docker
+
+#
+# Alternatively, just use the curl-able install.sh script provided at $(s3_url)
+#
+EOF
+
+	# Add redirect at /ubuntu/info for URL-backwards-compatibility
+	rm -rf /tmp/emptyfile && touch /tmp/emptyfile
+	s3cmd --acl-public --add-header='x-amz-website-redirect-location:/ubuntu/' --mime-type='text/plain' put /tmp/emptyfile s3://$BUCKET/ubuntu/info
+
+	echo "APT repository uploaded. Instructions available at $(s3_url)/ubuntu"
+}
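The uploaded instructions stitch an APT "deb" line together from `s3_url` and the fixed `docker` codename declared in the reprepro distributions file. A sketch of the assembled line, with `s3_url` stubbed to a fixed bucket URL for illustration:

```shell
#!/usr/bin/env bash
# Sketch: how the "deb" line in the uploaded index is built.
# s3_url is stubbed; the real function resolves the bucket's URL.
set -e

s3_url() { echo 'https://test.docker.com'; }

line="deb $(s3_url)/ubuntu docker main"
echo "$line"   # deb https://test.docker.com/ubuntu docker main
```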
+
+# Upload binaries and tgz files to S3
+release_binaries() {
+	[ -e bundles/$VERSION/cross/linux/amd64/docker-$VERSION ] || {
+		echo >&2 './hack/make.sh must be run before release_binaries'
+		exit 1
+	}
+
+	for d in bundles/$VERSION/cross/*/*; do
+		GOARCH="$(basename "$d")"
+		GOOS="$(basename "$(dirname "$d")")"
+		release_build "$GOOS" "$GOARCH"
+	done
+
+	# TODO create redirect from builds/*/i686 to builds/*/i386
+
+	cat <<EOF | write_to_s3 s3://$BUCKET/builds/index
+# To install, run the following command as root:
+curl -sSL -O $(s3_url)/builds/Linux/x86_64/docker-$VERSION && chmod +x docker-$VERSION && sudo mv docker-$VERSION /usr/local/bin/docker
+# Then start docker in daemon mode:
+sudo /usr/local/bin/docker -d
+EOF
+
+	# Add redirect at /builds/info for URL-backwards-compatibility
+	rm -rf /tmp/emptyfile && touch /tmp/emptyfile
+	s3cmd --acl-public --add-header='x-amz-website-redirect-location:/builds/' --mime-type='text/plain' put /tmp/emptyfile s3://$BUCKET/builds/info
+
+	if [ -z "$NOLATEST" ]; then
+		echo "Advertising $VERSION on $BUCKET as most recent version"
+		echo $VERSION | write_to_s3 s3://$BUCKET/latest
+	fi
+}
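The basename/dirname walk above works because the cross bundle lays binaries out as `cross/GOOS/GOARCH`, so the last two path components recover the platform pair. A sketch with a hypothetical bundle path:

```shell
#!/usr/bin/env bash
# Sketch of recovering GOOS/GOARCH from the cross bundle layout.
# The path below is hypothetical.
set -e

d=bundles/1.3.0/cross/linux/amd64

GOARCH="$(basename "$d")"
GOOS="$(basename "$(dirname "$d")")"
echo "$GOOS $GOARCH"   # linux amd64
```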
+
+# Upload the index script
+release_index() {
+	sed "s,url='https://get.docker.com/',url='$(s3_url)/'," hack/install.sh | write_to_s3 s3://$BUCKET/index
+}
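The sed expression in `release_index` uses commas as delimiters so the slashes inside the URLs need no escaping. A sketch against a stand-in file (the sample file and replacement URL are assumptions; the real input is hack/install.sh):

```shell
#!/usr/bin/env bash
# Sketch of the comma-delimited sed substitution used in release_index.
set -e

printf "url='https://get.docker.com/'\n" > install.sample
sed "s,url='https://get.docker.com/',url='https://test.docker.com/'," install.sample
```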
+
+release_test() {
+	if [ -e "bundles/$VERSION/test" ]; then
+		s3cmd --acl-public sync bundles/$VERSION/test/ s3://$BUCKET/test/
+	fi
+}
+
+setup_gpg() {
+	# Make sure that we have our keys
+	mkdir -p $HOME/.gnupg/
+	s3cmd sync s3://$BUCKET/ubuntu/.gnupg/ $HOME/.gnupg/ || true
+	gpg --list-keys releasedocker >/dev/null || {
+		gpg --gen-key --batch <<EOF
+Key-Type: RSA
+Key-Length: 4096
+Passphrase: $GPG_PASSPHRASE
+Name-Real: Docker Release Tool
+Name-Email: docker@docker.com
+Name-Comment: releasedocker
+Expire-Date: 0
+%commit
+EOF
+	}
+}
+
+main() {
+	build_all
+	setup_s3
+	setup_gpg
+	release_binaries
+	release_ubuntu
+	release_index
+	release_test
+}
+
+main
+
+echo
+echo
+echo "Release complete; see $(s3_url)"
+echo
new file mode 100755
@@ -0,0 +1,22 @@
+#!/usr/bin/env bash
+
+## Run this script from the root of the docker repository
+## to query project stats useful to the maintainers.
+## You will need to install `pulls` and `issues` from
+## http://github.com/crosbymichael/pulls
+
+set -e
+
+echo -n "Open pulls: "
+PULLS=$(pulls | wc -l); let PULLS=$PULLS-1
+echo $PULLS
+
+echo -n "Pulls alru: "
+pulls alru
+
+echo -n "Open issues: "
+ISSUES=$(issues list | wc -l); let ISSUES=$ISSUES-1
+echo $ISSUES
+
+echo -n "Issues alru: "
+issues alru
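The `let PULLS=$PULLS-1` arithmetic compensates for `pulls` printing a header row before its data, so `wc -l` overcounts by one. A sketch with a stand-in tool (`fake_pulls` is hypothetical; the real `pulls` comes from crosbymichael/pulls):

```shell
#!/usr/bin/env bash
# Sketch of the count-minus-header arithmetic above.
fake_pulls() { printf 'NUMBER TITLE\n1234 fix thing\n5678 add thing\n'; }

PULLS=$(fake_pulls | wc -l); let PULLS=$PULLS-1
echo "Open pulls: $PULLS"   # Open pulls: 2
```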
new file mode 100755
@@ -0,0 +1,73 @@
+#!/usr/bin/env bash
+set -e
+
+cd "$(dirname "$BASH_SOURCE")/.."
+
+# Downloads dependencies into vendor/ directory
+mkdir -p vendor
+cd vendor
+
+clone() {
+	vcs=$1
+	pkg=$2
+	rev=$3
+
+	pkg_url=https://$pkg
+	target_dir=src/$pkg
+
+	echo -n "$pkg @ $rev: "
+
+	if [ -d $target_dir ]; then
+		echo -n 'rm old, '
+		rm -fr $target_dir
+	fi
+
+	echo -n 'clone, '
+	case $vcs in
+		git)
+			git clone --quiet --no-checkout $pkg_url $target_dir
+			( cd $target_dir && git reset --quiet --hard $rev )
+			;;
+		hg)
+			hg clone --quiet --updaterev $rev $pkg_url $target_dir
+			;;
+	esac
+
+	echo -n 'rm VCS, '
+	( cd $target_dir && rm -rf .{git,hg} )
+
+	echo done
+}
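The `clone` helper captures the whole vendoring pattern: fetch, pin an exact revision, then strip the VCS metadata so only plain source lands in vendor/. A sketch of the git branch of that pattern against a local throwaway repository (no network; the package path is hypothetical):

```shell
#!/usr/bin/env bash
# Sketch of the pin-then-strip vendoring pattern above, using a local
# git repo as the "upstream" instead of a real remote.
set -e
cd "$(mktemp -d)"

git init -q upstream
(
	cd upstream
	git config user.email vendor@example.com
	git config user.name vendor
	echo 'package pty' > pty.go
	git add . && git commit -qm 'initial'
)
rev=$(cd upstream && git rev-parse HEAD)

git clone --quiet --no-checkout upstream src/github.com/example/pty
( cd src/github.com/example/pty && git reset --quiet --hard "$rev" )
rm -rf src/github.com/example/pty/.git    # the "rm VCS" step

ls src/github.com/example/pty             # pty.go only, no metadata
```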
+
+clone git github.com/kr/pty 67e2db24c8
+
+clone git github.com/gorilla/context 14f550f51a
+
+clone git github.com/gorilla/mux 136d54f81f
+
+clone git github.com/tchap/go-patricia v1.0.1
+
+clone hg code.google.com/p/go.net 84a4013f96e0
+
+clone hg code.google.com/p/gosqlite 74691fb6f837
+
+clone git github.com/docker/libtrust d273ef2565ca
+
+clone git github.com/Sirupsen/logrus v0.6.0
+
+# get Go tip's archive/tar, for xattr support and improved performance
+# TODO after Go 1.4 drops, bump our minimum supported version and drop this vendored dep
+if [ "$1" = '--go' ]; then
+	# Go takes forever and a half to clone, so we only redownload it when explicitly requested via the "--go" flag to this script.
+	clone hg code.google.com/p/go 1b17b3426e3c
+	mv src/code.google.com/p/go/src/pkg/archive/tar tmp-tar
+	rm -rf src/code.google.com/p/go
+	mkdir -p src/code.google.com/p/go/src/pkg/archive
+	mv tmp-tar src/code.google.com/p/go/src/pkg/archive/tar
+fi
+
+clone git github.com/docker/libcontainer 4ae31b6ceb2c2557c9f05f42da61b0b808faa5a4
+# see src/github.com/docker/libcontainer/update-vendor.sh which is the "source of truth" for libcontainer deps (just like this file)
+rm -rf src/github.com/docker/libcontainer/vendor
+eval "$(grep '^clone ' src/github.com/docker/libcontainer/update-vendor.sh | grep -v 'github.com/codegangsta/cli')"
+# we exclude "github.com/codegangsta/cli" here because it's only needed for "nsinit", which Docker doesn't include