Merge pull request #93 from pmorie/build-doc

Merged by openshift-bot

OpenShift Bot authored on 2014/09/17 05:42:03
Kubernetes Proposal - Build Plugin
==================================

Problem/Rationale
-----------------

Kubernetes creates Docker containers from images that were built elsewhere and pushed to a Docker
registry.  Building Docker images is a foundational use case in Docker-based workflows for
application development and deployment.  Without support for builds in Kubernetes, a system
administrator who wanted a system that could build images would have to select a pre-existing
build system or write a new one, and then figure out how to deploy and maintain it on or off
Kubernetes.  However, in most cases operators would wish to leverage the ability of Kubernetes to
schedule task execution into a pool of available resources, and most build systems would want to
take advantage of that mechanism.  Offering an API for builds also makes Kubernetes a viable
backend for arbitrary third-party Docker image build systems that require resource constraints
and scheduling capabilities, and allows organizations to orchestrate Docker builds from their
existing continuous integration processes.  This is not a core component of Kubernetes, but would
have significant value as a plugin to enable CI/CD flows around Docker images.

Most build jobs share common characteristics: a set of build context parameters that define the job,
the need to run a certain process to completion, the capture of the logs from that build process,
the publishing of resources from successful builds, and the final “status” of the build.  In
addition, the image-driven deployment flow that Kubernetes advocates depends on having images
available.

Builds should take advantage of resource restrictions – specifying limitations on things such as CPU
usage, memory usage, and build (pod) execution time – once support for this exists in Kubernetes.
Additionally, builds would become repeatable and consistent (same inputs = same output).

There are potentially several different types of builds that produce other types of output as well.
This proposal is for adding functionality to Kubernetes to build Docker images.

Here are some possible user scenarios for builds in Kubernetes:

1.   As a user of Kubernetes, I want to build an image from a source URL and push it to a registry
     (for eventual deployment in Kubernetes).
2.   As a user of Kubernetes, I want to build an image from a binary input (Docker context, artifact)
     and push it to a registry (for eventual deployment in Kubernetes).
3.   As a provider of a service that involves building Docker images, I want to offload the resource
     allocation, scheduling, and garbage collection associated with that activity to Kubernetes
     instead of solving those problems myself.
4.   As a developer of a system that involves building Docker images, I want to take advantage of
     Kubernetes to perform the build, but orchestrate it from an existing CI server in order to
     integrate with my organization’s devops SOPs.

Example Use: Cloud IDE
----------------------

Company X offers a Docker-based cloud IDE service and needs to build Docker images at scale for
their customers’ hosted projects.  Company X wants a turn-key solution for this that handles
scheduling, resource allocation, and garbage collection.  Using the build API, Company X can
leverage Kubernetes for the build work and concentrate on solving their core business problems.

Example Use: Enterprise Devops
------------------------------

Company Y wants to leverage Kubernetes to build Docker images, but their devops SOPs mandate the
use of a third-party CI server in order to facilitate things like triggering builds when an
upstream project is built and promoting builds when the result is signed off on in the CI server.
Using the build API, Company Y implements workflows in the CI server that orchestrate building in
Kubernetes while integrating with their organization’s SOPs.

Proposed Design
---------------

Note: The proposed solution requires that run-once containers be implemented in Kubernetes.

**BuildConfig**

Add a new BuildConfig type that will be used to record the inputs to a Build. Its fields could include:

1.  Source URI
2.  Source ref (e.g. git branch)
3.  Image to use to perform the build
4.  Desired image tag
5.  Docker registry URL

Add appropriate registries and storage for BuildConfig and register /buildConfigs with the apiserver.

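As a rough illustration of the fields listed above, the BuildConfig type might look like the following Go sketch. All field names and example values here are hypothetical; none of this is an existing Kubernetes API.

```go
package main

import "fmt"

// BuildConfig is a hypothetical sketch of the proposed type; field names
// are illustrative only, not part of any existing Kubernetes API.
type BuildConfig struct {
	SourceURI    string // location of the source, e.g. a git URL
	SourceRef    string // source ref, e.g. a git branch
	BuilderImage string // image to use to perform the build
	ImageTag     string // desired tag for the built image
	RegistryURL  string // Docker registry URL to push the result to
}

func main() {
	// Example (hypothetical) configuration as it might be submitted to
	// the /buildConfigs endpoint.
	cfg := BuildConfig{
		SourceURI:    "git://example.com/app.git",
		SourceRef:    "master",
		BuilderImage: "example/docker-builder",
		ImageTag:     "example/app:latest",
		RegistryURL:  "registry.example.com",
	}
	fmt.Printf("%+v\n", cfg)
}
```
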
**Build**

Add a new Build type that will be used to record a build for historical purposes. A Build includes:

1.  A copy of a BuildConfig (as the standalone BuildConfig could be updated over time and should not
    affect a specific build)
2.  A status field (new, pending, running, complete, failed)
3.  The ID of the Pod associated with this Build

Add appropriate registries and storage for Build and register /builds with the apiserver.

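A corresponding sketch of the Build type and its status values, again with hypothetical names; the BuildConfig here is a trimmed stand-in for the configuration type proposed earlier.

```go
package main

import "fmt"

// BuildConfig is a trimmed, hypothetical stand-in for the configuration
// type proposed earlier; only two of its fields are shown.
type BuildConfig struct {
	SourceURI string
	SourceRef string
}

// BuildStatus enumerates the proposed build states.
type BuildStatus string

const (
	BuildStatusNew      BuildStatus = "new"
	BuildStatusPending  BuildStatus = "pending"
	BuildStatusRunning  BuildStatus = "running"
	BuildStatusComplete BuildStatus = "complete"
	BuildStatusFailed   BuildStatus = "failed"
)

// Build records a single build for historical purposes.
type Build struct {
	ID     string
	Config BuildConfig // copied at creation, so later edits to the
	// standalone BuildConfig don't affect this build
	Status BuildStatus
	PodID  string // ID of the pod associated with this build
}

func main() {
	b := Build{
		ID:     "build-1",
		Config: BuildConfig{SourceURI: "git://example.com/app.git", SourceRef: "master"},
		Status: BuildStatusNew,
	}
	fmt.Println(b.ID, b.Status)
}
```
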
**BuildController**

Add a new BuildController that runs a sync loop to execute builds.

For newly created builds, the BuildController will assign a pod ID to the build and set the build’s
state to pending. This way, the assignment of the pod ID and pending status is idempotent and won’t
result in two BuildControllers potentially scheduling two different pods for the same build.

For pending builds, the BuildController will attempt to create a pod to perform the build. If the
creation succeeds, it sets the build’s status to running. If the pod already exists, another
BuildController has already processed this build in the pending state, and the result is a no-op.
Any other pod creation error results in the build’s status being set to failed.

It may be desirable to support variations in the pod descriptor used to create the build pod. As
such, it could be possible for plugins/extensions to register additional build pod definitions.
Examples of variations include a builder that runs `docker build` as well as a builder that uses
the Source-To-Images (sti) tool (https://github.com/openshift/geard/tree/master/cmd/sti).

For running builds, the BuildController will monitor the status of the pod. If the pod is still
running and the build has exceeded its allotted execution time, the BuildController will consider
it failed. If the pod is terminated, the BuildController will examine the exit codes for each of
the pod’s containers. If any exit code is non-zero, the build is marked as failed. Otherwise, it
is considered complete (successful).

Once the build has reached a terminal state (complete or failed), the BuildController will delete
the pod associated with the build. In the future, it will be desirable to keep a record of the
pod’s containers’ logs, but that is out of scope for this proposal.

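The state transitions described above can be sketched as a single sync function. Everything here is illustrative: the callbacks stand in for whatever pod-creation and pod-monitoring APIs the controller would actually use, and execution-time-limit handling is omitted.

```go
package main

import (
	"errors"
	"fmt"
)

// Build is a minimal stand-in for the proposed Build type.
type Build struct {
	ID     string
	Status string // "new", "pending", "running", "complete", "failed"
	PodID  string
}

// ErrPodExists is returned by createPod when the pod was already created,
// e.g. by another BuildController that processed this build first.
var ErrPodExists = errors.New("pod already exists")

// syncBuild advances one build through the controller's state machine on
// each pass of the sync loop. createPod and podExited are hypothetical
// stand-ins for the real pod APIs.
func syncBuild(b *Build, createPod func(podID string) error, podExited func(podID string) (done bool, exitCode int)) {
	switch b.Status {
	case "new":
		// Assign the pod ID before creating anything, so that a second
		// controller observing the same build targets the same pod.
		b.PodID = "build-pod-" + b.ID
		b.Status = "pending"
	case "pending":
		err := createPod(b.PodID)
		switch {
		case err == nil:
			b.Status = "running"
		case errors.Is(err, ErrPodExists):
			// Another controller already created the pod: no-op.
		default:
			b.Status = "failed" // any other creation error fails the build
		}
	case "running":
		// A real controller would also fail builds that exceed their
		// allotted execution time; omitted here.
		if done, code := podExited(b.PodID); done {
			if code == 0 {
				b.Status = "complete"
			} else {
				b.Status = "failed"
			}
		}
	}
}

func main() {
	b := &Build{ID: "1", Status: "new"}
	syncBuild(b, nil, nil)                                         // new -> pending, pod ID assigned
	syncBuild(b, func(string) error { return nil }, nil)           // pending -> running
	syncBuild(b, nil, func(string) (bool, int) { return true, 0 }) // running -> complete
	fmt.Println(b.Status, b.PodID)
}
```

Because the pod ID is assigned before the pod is created, a duplicate-creation error can safely be treated as "someone else got here first" rather than a failure.
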
Docker Daemon Location: Use the minion’s Docker socket
------------------------------------------------------

With this approach, a pod containing a single container (a build container) would be created. The
minion’s Docker socket would be bind mounted into the build container. The build container would
execute the build command (e.g. `docker build`), and all interaction with Docker would go through
the host’s (minion’s) Docker daemon.

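A sketch of what the build pod could look like under this approach. The manifest types below are illustrative stand-ins rather than the actual Kubernetes API; the key point is the bind mount of the host's Docker socket into the build container.

```go
package main

import "fmt"

// Minimal, hypothetical manifest types; not the real Kubernetes API.
type VolumeMount struct {
	HostPath      string
	ContainerPath string
}

type Container struct {
	Name    string
	Image   string
	Command []string
	Mounts  []VolumeMount
}

type Pod struct {
	ID         string
	Containers []Container
}

// sharedDaemonBuildPod returns a build pod whose single container talks to
// the minion's Docker daemon through a bind-mounted socket.
func sharedDaemonBuildPod(buildID string) Pod {
	return Pod{
		ID: "build-pod-" + buildID,
		Containers: []Container{{
			Name:    "build",
			Image:   "example/docker-builder", // hypothetical builder image
			Command: []string{"docker", "build", "-t", "example/app", "."},
			// Bind mount the host's Docker socket so the `docker` CLI in
			// the container drives the minion's daemon.
			Mounts: []VolumeMount{{
				HostPath:      "/var/run/docker.sock",
				ContainerPath: "/var/run/docker.sock",
			}},
		}},
	}
}

func main() {
	p := sharedDaemonBuildPod("1")
	fmt.Println(p.ID, p.Containers[0].Mounts[0].HostPath)
}
```
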
**Pros**

1.  Reduces the number of Docker daemons required
2.  Minimizes image storage requirements

**Cons**

1.  Not possible to constrain resources per-user
2.  Containers created during the build are created outside the scope of Kubernetes and are not
    managed by it
3.  Containers created during the build don’t have the build container as their parent process,
    making container cleanup more difficult

Docker Daemon Location: Docker-in-Docker
----------------------------------------

With this approach, a pod containing a single container (a build container) would be created. The
build container would launch its own Docker daemon in the background and then execute the build
command (e.g. `docker build`); all interaction with Docker would go through the container’s own
(private) Docker daemon.

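By contrast, a Docker-in-Docker build pod mounts nothing from the host but must run privileged. As before, the manifest types, image name, and startup command below are all hypothetical.

```go
package main

import "fmt"

// Minimal, hypothetical manifest types; not the real Kubernetes API.
type Container struct {
	Name       string
	Image      string
	Command    []string
	Privileged bool
}

type Pod struct {
	ID         string
	Containers []Container
}

// dindBuildPod returns a build pod whose single container starts its own
// private Docker daemon and builds against it.
func dindBuildPod(buildID string) Pod {
	return Pod{
		ID: "build-pod-" + buildID,
		Containers: []Container{{
			Name:  "build",
			Image: "example/dind-builder", // hypothetical image bundling a Docker daemon
			// Start a background daemon, give it a moment to come up, then
			// run the build against the container-local socket.
			Command: []string{"sh", "-c",
				"docker daemon & sleep 5 && docker build -t example/app ."},
			// Running a Docker daemon needs more privileges than a
			// non-privileged container offers.
			Privileged: true,
		}},
	}
}

func main() {
	p := dindBuildPod("1")
	fmt.Println(p.ID, p.Containers[0].Privileged)
}
```
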
**Pros**

1.  Build process resources can be constrained to the user’s acceptable limits (cgroups)
2.  Containers created during the build have the build container as their parent process, making
    container cleanup trivial

**Cons**

1.  Requires a privileged container, since running the Docker daemon (even as Docker-in-Docker)
    requires more privileges than a non-privileged container offers
2.  No easy way to share storage of images/layers among build containers, requiring each
    Docker-in-Docker instance to store its own unique, full copy of any image(s) downloaded during
    the build process.  A caching proxy running on the minion could at least minimize the number of
    times an image is pulled from a remote registry, but that doesn’t eliminate the need for each
    build container to have its own copy of the images.