Refactor dind

Maru Newby authored on 2016/09/23 11:36:07
Showing 23 changed files
... ...
@@ -228,27 +228,29 @@ Follow these steps to ensure that virtual box interfaces are unmanaged:
228 228
 === Develop and test using a docker-in-docker cluster
229 229
 
230 230
 It's possible to run an OpenShift multinode cluster on a single host
231
-via docker-in-docker (dind).  Cluster creation is cheaper since each
232
-node is a container instead of a VM.  This was implemented primarily
233
-to support multinode network testing, but may prove useful for other
234
-use cases.
231
+thanks to docker-in-docker (dind).  Cluster creation is cheaper since
232
+each node is a container instead of a VM.  This was initially
233
+implemented to support multinode network testing, but has proven
234
+useful for development as well.
235 235
 
236
-To run a dind cluster in a VM, follow steps 1-3 of the Vagrant
237
-instructions and then execute the following:
236
+Prerequisites:
238 237
 
239
-        $ export OPENSHIFT_DIND_DEV_CLUSTER=true
240
-        $ vagrant up
238
+1. A host running docker with SELinux disabled.
239
+2. An environment with the tools necessary to build origin.
240
+3. A clone of the origin repo.
241 241
 
242
-Bringing up the VM for the first time will take a while due to the
243
-overhead of package installation, building docker images, and building
244
-openshift.  Assuming the 'vagrant up' command completes without error,
245
-a dind OpenShift cluster should now be running on the VM.  To access
246
-the cluster, login to the VM:
242
+From the root of the origin repo, run the following command to launch
243
+a new cluster:
247 244
 
248
-        $ vagrant ssh
245
+        # -b to build origin, -i to build images
246
+        $ hack/dind-cluster.sh start -b -i
247
+
248
+Once the cluster is up, source the cluster's rc file to configure the
249
+environment to use it:
249 250
 
250
-Once on the VM, the 'oc' and 'openshift' commands can be used to
251
-interact with the cluster:
251
+        $ . dind-openshift.rc
252
+
253
+Now the 'oc' command can be used to interact with the cluster:
252 254
 
253 255
         $ oc get nodes
254 256
 
... ...
@@ -272,22 +274,12 @@ at the top of the dind-cluster.sh script.
272 272
 
273 273
 Attempting to start a cluster when one is already running will result
274 274
 in an error message from docker indicating that the named containers
275
-already exist.  To redeploy a cluster after making changes, use the
276
-'start' and 'stop' or 'restart' commands.  OpenShift is always built
277
-as part of the dind cluster deployment initiated by 'start' or
278
-'restart'.
279
-
280
-By default the cluster will consist of a master and 2 nodes.  The
281
-OPENSHIFT_NUM_MINIONS environment variable can be used to override the
282
-default of 2 nodes.
283
-
284
-Containers are torn down on stop and restart, but the root of the
285
-origin repo is mounted to /data in each container to allow for a
286
-persistent installation target.
287
-
288
-While it is possible to run a dind cluster on any host (not just a
289
-vagrant VM), it is recommended to consider the warnings at the top of
290
-the dind-cluster.sh script.
275
+already exist.  To redeploy a cluster, use the 'start' command with
276
+the '-r' flag to remove the existing cluster first.
277
+
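+For example, to replace a running cluster with one built from the
+current source tree (using the '-r' and '-b' flags documented in the
+script's usage text):
+
+        $ hack/dind-cluster.sh start -r -b
+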
278
+While it is possible to run a dind cluster directly on a Linux host,
279
+it is recommended to consider the warnings at the top of the
280
+dind-cluster.sh script.
291 281
 
292 282
 ==== Testing networking with docker-in-docker
293 283
 
... ...
@@ -1,11 +1,9 @@
1 1
 #!/bin/bash
2 2
 
3
-# WARNING: The script modifies the host on which it is run.  It loads
4
-# the openvwitch and br_netfilter modules and sets
5
-# net.bridge.bridge-nf-call-iptables=0.  Consider creating dind
6
-# clusters in a VM if this modification is undesirable:
7
-#
8
-#   OPENSHIFT_DIND_DEV_CLUSTER=1 vagrant up'
3
+# WARNING: The script modifies the host that docker is running on.  It
4
+# attempts to load the overlay, openvswitch and br_netfilter modules
5
+# and sets net.bridge.bridge-nf-call-iptables=0.  If this modification
6
+# is undesirable, consider running docker in a VM.
9 7
 #
10 8
 # Overview
11 9
 # ========
... ...
@@ -19,7 +17,7 @@
19 19
 # Dependencies
20 20
 # ------------
21 21
 #
22
-# This script has been tested on Fedora 21, but should work on any
22
+# This script has been tested on Fedora 24, but should work on any
23 23
 # release.  Docker is assumed to be installed.  At this time,
24 24
 # boot2docker is not supported.
25 25
 #
... ...
@@ -34,20 +32,12 @@
34 34
 # -----------------------
35 35
 #
36 36
 # By default, a dind openshift cluster stores its configuration
37
-# (openshift.local.*) in /tmp/openshift-dind-cluster/openshift.  Since
38
-# configuration is stored in a different location than a
39
-# vagrant-deployed cluster (which stores configuration in the root of
40
-# the origin tree), vagrant and dind clusters can run simultaneously
41
-# without conflict.  It's also possible to run multiple dind clusters
42
-# simultaneously by overriding the instance prefix.  The following
43
-# command would ensure configuration was stored at
44
-# /tmp/openshift-dind/cluster/my-cluster:
45
-#
46
-#    OPENSHIFT_INSTANCE_PREFIX=my-cluster hack/dind-cluster.sh [command]
37
+# (openshift.local.*) in /tmp/openshift-dind-cluster/openshift.  It's
38
+# possible to run multiple dind clusters simultaneously by overriding
39
+# the cluster id.  The following command would ensure
40
+# configuration was stored at /tmp/openshift-dind-cluster/my-cluster:
47 41
 #
48
-# It is also possible to specify an entirely different configuration path:
49
-#
50
-#    OPENSHIFT_CONFIG_ROOT=[path] hack/dind-cluster.sh [command]
42
+#    OPENSHIFT_CLUSTER_ID=my-cluster hack/dind-cluster.sh [command]
51 43
 #
52 44
 # Suggested Workflow
53 45
 # ------------------
... ...
@@ -56,10 +46,6 @@
56 56
 # breaking golang changes, the 'restart' command will ensure that an
57 57
 # existing cluster is cleaned up before deploying a new cluster.
58 58
 #
59
-# When only making non-breaking changes to golang code, the 'redeploy'
60
-# command avoids restarting the cluster.  'redeploy' rebuilds the
61
-# openshift binaries and deploys them to the existing cluster.
62
-#
63 59
 # Running Tests
64 60
 # -------------
65 61
 #
... ...
@@ -71,175 +57,61 @@ set -o errexit
71 71
 set -o nounset
72 72
 set -o pipefail
73 73
 
74
-DIND_MANAGEMENT_SCRIPT=true
75
-
76
-source $(dirname "${BASH_SOURCE}")/../contrib/vagrant/provision-config.sh
77
-
78
-# Enable xtrace for container script invocations if it is enabled
79
-# for this script.
80
-BASH_CMD=
81
-if set +o | grep -q '\-o xtrace'; then
82
-  BASH_CMD="bash -x"
83
-fi
84
-
85
-DOCKER_CMD=${DOCKER_CMD:-"sudo docker"}
86
-
87
-# Override the default CONFIG_ROOT path with one that is
88
-# cluster-specific.
89
-TMPDIR="${TMPDIR:-"/tmp"}"
90
-CONFIG_ROOT=${OPENSHIFT_CONFIG_ROOT:-${TMPDIR}/openshift-dind-cluster/${INSTANCE_PREFIX}}
91
-
92
-DEPLOY_SSH=${OPENSHIFT_DEPLOY_SSH:-true}
93
-
94
-DEPLOYED_CONFIG_ROOT="/config"
95
-
96
-DEPLOYED_ROOT="/data/src/github.com/openshift/origin"
97
-
98
-SCRIPT_ROOT="${DEPLOYED_ROOT}/contrib/vagrant"
99
-
100
-function check-selinux() {
101
-  if [[ "$(getenforce)" = "Enforcing" ]]; then
102
-    >&2 echo "Error: This script is not compatible with SELinux enforcing mode."
103
-    exit 1
104
-  fi
105
-}
106
-
107
-DIND_IMAGE="openshift/dind"
108
-BUILD_IMAGES="${OPENSHIFT_DIND_BUILD_IMAGES:-1}"
109
-
110
-function build-image() {
111
-  local build_root=$1
112
-  local image_name=$2
113
-
114
-  pushd "${build_root}" > /dev/null
115
-    ${DOCKER_CMD} build -t "${image_name}" .
116
-  popd > /dev/null
117
-}
118
-
119
-function build-images() {
120
-  # Building images is done by default but can be disabled to allow
121
-  # separation of image build from cluster creation.
122
-  if [[ "${BUILD_IMAGES}" = "1" ]]; then
123
-    echo "Building container images"
124
-    build-image "${OS_ROOT}/images/dind" "${DIND_IMAGE}"
125
-  fi
126
-}
127
-
128
-function get-docker-ip() {
129
-  local cid=$1
130
-
131
-  ${DOCKER_CMD} inspect --format '{{ .NetworkSettings.IPAddress }}' "${cid}"
132
-}
133
-
134
-function docker-exec-script() {
135
-    local cid=$1
136
-    local cmd=$2
137
-
138
-    ${DOCKER_CMD} exec -t "${cid}" ${BASH_CMD} ${cmd}
139
-}
74
+source "$(dirname "${BASH_SOURCE}")/lib/init.sh"
75
+source "${OS_ROOT}/images/dind/node/openshift-dind-lib.sh"
140 76
 
141 77
 function start() {
78
+  local origin_root=$1
79
+  local config_root=$2
80
+  local deployed_config_root=$3
81
+  local cluster_id=$4
82
+  local network_plugin=$5
83
+  local wait_for_cluster=$6
84
+  local node_count=$7
85
+
142 86
   # docker-in-docker's use of volumes is not compatible with SELinux
143 87
   check-selinux
144 88
 
145
-  echo "Configured network plugin: ${NETWORK_PLUGIN}"
146
-
147
-  # TODO(marun) - perform these operations in a container for boot2docker compat
148
-  echo "Ensuring compatible host configuration"
149
-  sudo modprobe openvswitch
150
-  sudo modprobe br_netfilter 2> /dev/null || true
151
-  sudo sysctl -w net.bridge.bridge-nf-call-iptables=0 > /dev/null
152
-  # overlayfs, if available, will be faster than vfs
153
-  sudo modprobe overlay 2> /dev/null || true
154
-  mkdir -p "${CONFIG_ROOT}"
155
-
156
-  if [[ "${SKIP_BUILD}" = "true" ]]; then
157
-    echo "WARNING: Skipping image build due to OPENSHIFT_SKIP_BUILD=true"
158
-  else
159
-    build-images
160
-  fi
89
+  echo "Starting dind cluster '${cluster_id}' with plugin '${network_plugin}'"
161 90
 
162
-  ## Create containers
163
-  echo "Launching containers"
164
-  local root_volume="-v ${OS_ROOT}:${DEPLOYED_ROOT}"
165
-  local config_volume="-v ${CONFIG_ROOT}:${DEPLOYED_CONFIG_ROOT}"
166
-  local volumes="${root_volume} ${config_volume}"
167
-  # systemd requires RTMIN+3 to shutdown properly
168
-  local stop="--stop-signal=$(kill -l RTMIN+3)"
169
-  local base_run_cmd="${DOCKER_CMD} run -dt ${stop} ${volumes}"
170
-
171
-  local master_cid="$(${base_run_cmd} --privileged --name="${MASTER_NAME}" \
172
-      --hostname="${MASTER_NAME}" "${DIND_IMAGE}")"
173
-  local master_ip="$(get-docker-ip "${master_cid}")"
174
-
175
-  local node_cids=()
176
-  local node_ips=()
91
+  # Ensuring compatible host configuration
92
+  #
93
+  # Running in a container ensures that the configuration is applied
94
+  # to the docker host even if docker is running remotely.  The
95
+  # openshift/dind-node image is used because it has sysctl installed.
96
+  ${DOCKER_CMD} run --privileged --net=host --rm -v /lib/modules:/lib/modules \
97
+                openshift/dind-node bash -e -c \
98
+                '/usr/sbin/modprobe openvswitch;
99
+                /usr/sbin/modprobe overlay 2> /dev/null || true;
100
+                /usr/sbin/modprobe br_netfilter 2> /dev/null || true;
101
+                /usr/sbin/sysctl -w net.bridge.bridge-nf-call-iptables=0 > /dev/null'
102
+
103
+  # Initialize the cluster config path
104
+  mkdir -p "${config_root}"
105
+  echo "OPENSHIFT_NETWORK_PLUGIN=${network_plugin}" > "${config_root}/network-plugin"
106
+  copy-runtime "${origin_root}" "${config_root}/"
107
+
108
+  local volumes="-v ${config_root}:${deployed_config_root}"
109
+  local run_cmd="${DOCKER_CMD} run -dt ${volumes}  --privileged"
110
+
111
+  # Create containers
112
+  ${run_cmd} --name="${MASTER_NAME}" --hostname="${MASTER_NAME}" "${MASTER_IMAGE}" > /dev/null
177 113
   for name in "${NODE_NAMES[@]}"; do
178
-    local cid="$(${base_run_cmd} --privileged --name="${name}" \
179
-        --hostname="${name}" "${DIND_IMAGE}")"
180
-    node_cids+=( "${cid}" )
181
-    node_ips+=( "$(get-docker-ip "${cid}")" )
114
+    ${run_cmd} --name="${name}" --hostname="${name}" "${NODE_IMAGE}" > /dev/null
182 115
   done
183
-  node_ips="$(os::provision::join , ${node_ips[@]})"
184
-
185
-  ## Provision containers
186
-  local args="${master_ip} ${NODE_COUNT} ${node_ips} ${INSTANCE_PREFIX} \
187
--n ${NETWORK_PLUGIN}"
188
-  if [[ "${SKIP_BUILD}" = "true" ]]; then
189
-      args="${args} -s"
190
-  fi
191 116
 
192
-  echo "Provisioning ${MASTER_NAME}"
193
-  local cmd="${SCRIPT_ROOT}/provision-master.sh ${args} -c \
194
-${DEPLOYED_CONFIG_ROOT}"
195
-  docker-exec-script "${master_cid}" "${cmd}"
196
-
197
-  if [[ "${DEPLOY_SSH}" = "true" ]]; then
198
-    ${DOCKER_CMD} exec -t "${master_cid}" ssh-keygen -N '' -q -f /root/.ssh/id_rsa
199
-    cmd="cat /root/.ssh/id_rsa.pub"
200
-    local public_key="$(${DOCKER_CMD} exec -t "${master_cid}" ${cmd})"
201
-    cmd="cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys"
202
-    ${DOCKER_CMD} exec -t "${master_cid}" ${cmd}
203
-    ${DOCKER_CMD} exec -t "${master_cid}" systemctl start sshd
204
-  fi
205
-
206
-  # Ensure that all users (e.g. outside the container) have read-write
207
-  # access to the openshift configuration.  Security shouldn't be a
208
-  # concern for dind since it should only be used for dev and test.
209
-  local openshift_config_path="${CONFIG_ROOT}/openshift.local.config"
210
-  find "${openshift_config_path}" -exec sudo chmod ga+rw {} \;
211
-  find "${openshift_config_path}" -type d -exec sudo chmod ga+x {} \;
212
-
213
-  for (( i=0; i < ${#node_cids[@]}; i++ )); do
214
-    local node_index=$((i + 1))
215
-    local cid="${node_cids[$i]}"
216
-    local name="${NODE_NAMES[$i]}"
217
-    echo "Provisioning ${name}"
218
-    cmd="${SCRIPT_ROOT}/provision-node.sh ${args} -i ${node_index} -c \
219
-${DEPLOYED_CONFIG_ROOT}"
220
-    docker-exec-script "${cid}" "${cmd}"
221
-
222
-    if [[ "${DEPLOY_SSH}" = "true" ]]; then
223
-      ${DOCKER_CMD} exec -t "${cid}" mkdir -p /root/.ssh
224
-      cmd="echo ${public_key} > /root/.ssh/authorized_keys"
225
-      ${DOCKER_CMD} exec -t "${cid}" bash -c "${cmd}"
226
-      ${DOCKER_CMD} exec -t "${cid}" systemctl start sshd
227
-    fi
228
-  done
229
-
230
-  local rc_file="dind-${INSTANCE_PREFIX}.rc"
231
-  local admin_config="$(os::provision::get-admin-config ${CONFIG_ROOT})"
232
-  local bin_path="$(os::build::get-bin-output-path "${OS_ROOT}")"
117
+  local rc_file="dind-${cluster_id}.rc"
118
+  local admin_config
119
+  admin_config="$(get-admin-config "${config_root}")"
120
+  local bin_path
121
+  bin_path="$(os::build::get-bin-output-path "${origin_root}")"
233 122
   cat >"${rc_file}" <<EOF
234 123
 export KUBECONFIG=${admin_config}
235 124
 export PATH=\$PATH:${bin_path}
236 125
 EOF
237 126
 
238
-  # Disable the sdn node as late as possible to allow time for the
239
-  # node to register itself.
240
-  if [[ "${SDN_NODE}" = "true" ]]; then
241
-    os::provision::disable-node "${OS_ROOT}" "${CONFIG_ROOT}" \
242
-        "${SDN_NODE_NAME}"
127
+  if [[ -n "${wait_for_cluster}" ]]; then
128
+    wait-for-cluster "${config_root}" "${node_count}"
243 129
   fi
244 130
 
245 131
   if [[ "${KUBECONFIG:-}" != "${admin_config}"  ||
... ...
@@ -255,82 +127,116 @@ cluster's rc file to configure the bash environment:
255 255
 }
256 256
 
257 257
 function stop() {
258
-  echo "Cleaning up docker-in-docker containers"
258
+  local config_root=$1
259
+  local cluster_id=$2
260
+
261
+  echo "Stopping dind cluster '${cluster_id}'"
259 262
 
260
-  local master_cid="$(${DOCKER_CMD} ps -qa --filter "name=${MASTER_NAME}")"
263
+  local master_cid
264
+  master_cid="$(${DOCKER_CMD} ps -qa --filter "name=${MASTER_NAME}")"
261 265
   if [[ "${master_cid}" ]]; then
262
-    ${DOCKER_CMD} rm -f "${master_cid}"
266
+    ${DOCKER_CMD} rm -f "${master_cid}" > /dev/null
263 267
   fi
264 268
 
265
-  local node_cids="$(${DOCKER_CMD} ps -qa --filter "name=${NODE_PREFIX}")"
269
+  local node_cids
270
+  node_cids="$(${DOCKER_CMD} ps -qa --filter "name=${NODE_PREFIX}")"
266 271
   if [[ "${node_cids}" ]]; then
267 272
     node_cids=(${node_cids//\n/ })
268 273
     for cid in "${node_cids[@]}"; do
269
-      ${DOCKER_CMD} rm -f "${cid}"
274
+      ${DOCKER_CMD} rm -f "${cid}" > /dev/null
270 275
     done
271 276
   fi
272 277
 
273
-  echo "Cleanup up configuration to avoid conflict with a future cluster"
278
+  # Cleaning up configuration to avoid conflict with a future cluster
274 279
   # The container will have created configuration as root
275
-  sudo rm -rf ${CONFIG_ROOT}/openshift.local.*
280
+  sudo rm -rf "${config_root}"/openshift.local.etcd
281
+  sudo rm -rf "${config_root}"/openshift.local.config
276 282
 
277 283
   # Cleanup orphaned volumes
278 284
   #
279 285
   # See: https://github.com/jpetazzo/dind#important-warning-about-disk-usage
280 286
   #
281
-  echo "Cleaning up volumes used by docker-in-docker daemons"
282
-  local volume_ids=$(${DOCKER_CMD} volume ls -qf dangling=true)
283
-  if [[ "${volume_ids}" ]]; then
284
-    ${DOCKER_CMD} volume rm ${volume_ids}
287
+  for volume in $( ${DOCKER_CMD} volume ls -qf dangling=true ); do
288
+    ${DOCKER_CMD} volume rm "${volume}" > /dev/null
289
+  done
290
+}
291
+
292
+function check-selinux() {
293
+  if [[ "$(getenforce)" = "Enforcing" ]]; then
294
+    >&2 echo "Error: This script is not compatible with SELinux enforcing mode."
295
+    exit 1
285 296
   fi
286 297
 }
287 298
 
288
-# Build and deploy openshift binaries to an existing cluster
289
-function redeploy() {
290
-  local node_service="openshift-node"
299
+function get-network-plugin() {
300
+  local plugin=$1
291 301
 
292
-  ${DOCKER_CMD} exec -t "${MASTER_NAME}" bash -c "\
293
-. ${SCRIPT_ROOT}/provision-util.sh ; \
294
-os::provision::build-origin ${DEPLOYED_ROOT} ${SKIP_BUILD}"
302
+  local subnet_plugin="redhat/openshift-ovs-subnet"
303
+  local multitenant_plugin="redhat/openshift-ovs-multitenant"
304
+  local default_plugin="${subnet_plugin}"
295 305
 
296
-  echo "Stopping ${MASTER_NAME} service(s)"
297
-  ${DOCKER_CMD} exec -t "${MASTER_NAME}" systemctl stop "${MASTER_NAME}"
298
-  if [[ "${SDN_NODE}" = "true" ]]; then
299
-    ${DOCKER_CMD} exec -t "${MASTER_NAME}" systemctl stop "${node_service}"
300
-  fi
301
-  echo "Updating ${MASTER_NAME} binaries"
302
-  ${DOCKER_CMD} exec -t "${MASTER_NAME}" bash -c \
303
-". ${SCRIPT_ROOT}/provision-util.sh ; \
304
-os::provision::install-cmds ${DEPLOYED_ROOT}"
305
-  echo "Starting ${MASTER_NAME} service(s)"
306
-  ${DOCKER_CMD} exec -t "${MASTER_NAME}" systemctl start "${MASTER_NAME}"
307
-  if [[ "${SDN_NODE}" = "true" ]]; then
308
-    ${DOCKER_CMD} exec -t "${MASTER_NAME}" systemctl start "${node_service}"
306
+  if [[ "${plugin}" != "${subnet_plugin}" &&
307
+          "${plugin}" != "${multitenant_plugin}" &&
308
+          "${plugin}" != "cni" ]]; then
309
+    if [[ -n "${plugin}" ]]; then
310
+      >&2 echo "Invalid network plugin: ${plugin}"
311
+    fi
312
+    plugin="${default_plugin}"
309 313
   fi
314
+  echo "${plugin}"
315
+}
310 316
 
311
-  for node_name in "${NODE_NAMES[@]}"; do
312
-    echo "Stopping ${node_name} service"
313
-    ${DOCKER_CMD} exec -t "${node_name}" systemctl stop "${node_service}"
314
-    echo "Updating ${node_name} binaries"
315
-    ${DOCKER_CMD} exec -t "${node_name}" bash -c "\
316
-. ${SCRIPT_ROOT}/provision-util.sh ; \
317
-os::provision::install-cmds ${DEPLOYED_ROOT}"
318
-    echo "Starting ${node_name} service"
319
-    ${DOCKER_CMD} exec -t "${node_name}" systemctl start "${node_service}"
320
-  done
317
+function get-docker-ip() {
318
+  local cid=$1
319
+
320
+  ${DOCKER_CMD} inspect --format '{{ .NetworkSettings.IPAddress }}' "${cid}"
321
+}
322
+
323
+function get-admin-config() {
324
+  local config_root=$1
325
+
326
+  echo "${config_root}/openshift.local.config/master/admin.kubeconfig"
327
+}
328
+
329
+function copy-runtime() {
330
+  local origin_root=$1
331
+  local target=$2
332
+
333
+  cp "$(os::build::find-binary openshift)" "${target}"
334
+  local osdn_plugin_path="${origin_root}/pkg/sdn/plugin/bin"
335
+  cp "${osdn_plugin_path}/openshift-sdn-ovs" "${target}"
336
+  cp "${osdn_plugin_path}/openshift-sdn-docker-setup.sh" "${target}"
337
+}
338
+
339
+function wait-for-cluster() {
340
+  local config_root=$1
341
+  local expected_node_count=$2
342
+
343
+  # Increment the node count to ensure that the sdn node also reports readiness
344
+  (( expected_node_count++ ))
345
+
346
+  local kubeconfig
347
+  kubeconfig="$(get-admin-config "${config_root}")"
348
+  local oc
349
+  oc="$(os::build::find-binary oc)"
350
+
351
+  local msg="${expected_node_count} nodes to report readiness"
352
+  local condition="nodes-are-ready ${kubeconfig} ${oc} ${expected_node_count}"
353
+  os::util::wait-for-condition "${msg}" "${condition}"
321 354
 }
322 355
 
323 356
 function nodes-are-ready() {
324
-  local oc="$(os::build::find-binary oc)"
325
-  local kc="$(os::provision::get-admin-config ${CONFIG_ROOT})"
357
+  local kubeconfig=$1
358
+  local oc=$2
359
+  local expected_node_count=$3
360
+
361
+  # TODO - do not count any node whose name matches the master node e.g. 'node-master'
326 362
   read -d '' template <<'EOF'
327 363
 {{range $item := .items}}
328
-  {{if not .spec.unschedulable}}
329
-    {{range .status.conditions}}
330
-      {{if eq .type "Ready"}}
331
-        {{if eq .status "True"}}
332
-          {{printf "%s\\n" $item.metadata.name}}
333
-        {{end}}
364
+  {{range .status.conditions}}
365
+    {{if eq .type "Ready"}}
366
+      {{if eq .status "True"}}
367
+        {{printf "%s\\n" $item.metadata.name}}
334 368
       {{end}}
335 369
     {{end}}
336 370
   {{end}}
... ...
@@ -338,42 +244,130 @@ function nodes-are-ready() {
338 338
 EOF
339 339
   # Remove formatting before use
340 340
   template="$(echo "${template}" | tr -d '\n' | sed -e 's/} \+/}/g')"
341
-  local count="$("${oc}" --config="${kc}" get nodes \
342
-                         --template "${template}" | wc -l)"
343
-  test "${count}" -ge "${NODE_COUNT}"
341
+  local count
342
+  count="$("${oc}" --config="${kubeconfig}" get nodes \
343
+                   --template "${template}" 2> /dev/null | \
344
+                   wc -l)"
345
+  test "${count}" -ge "${expected_node_count}"
344 346
 }
345 347
 
346
-function wait-for-cluster() {
347
-  local msg="nodes to register with the master"
348
-  local condition="nodes-are-ready"
349
-  os::provision::wait-for-condition "${msg}" "${condition}"
348
+function build-images() {
349
+  local origin_root=$1
350
+
351
+  echo "Building container images"
352
+  build-image "${origin_root}/images/dind/" "${BASE_IMAGE}"
353
+  build-image "${origin_root}/images/dind/node" "${NODE_IMAGE}"
354
+  build-image "${origin_root}/images/dind/master" "${MASTER_IMAGE}"
355
+}
356
+
357
+function build-image() {
358
+  local build_root=$1
359
+  local image_name=$2
360
+
361
+  pushd "${build_root}" > /dev/null
362
+    ${DOCKER_CMD} build -t "${image_name}" .
363
+  popd > /dev/null
350 364
 }
351 365
 
366
+DOCKER_CMD=${DOCKER_CMD:-"sudo docker"}
367
+
368
+CLUSTER_ID="${OPENSHIFT_CLUSTER_ID:-openshift}"
369
+
370
+TMPDIR="${TMPDIR:-"/tmp"}"
371
+CONFIG_ROOT="${OPENSHIFT_CONFIG_ROOT:-${TMPDIR}/openshift-dind-cluster/${CLUSTER_ID}}"
372
+DEPLOYED_CONFIG_ROOT="/data"
373
+
374
+MASTER_NAME="${CLUSTER_ID}-master"
375
+NODE_PREFIX="${CLUSTER_ID}-node-"
376
+NODE_COUNT=2
377
+NODE_NAMES=()
378
+for (( i=1; i<=NODE_COUNT; i++ )); do
379
+  NODE_NAMES+=( "${NODE_PREFIX}${i}" )
380
+done
381
+
382
+BASE_IMAGE="openshift/dind"
383
+NODE_IMAGE="openshift/dind-node"
384
+MASTER_IMAGE="openshift/dind-master"
385
+
352 386
 case "${1:-""}" in
353 387
   start)
354
-    start
388
+    BUILD=
389
+    BUILD_IMAGES=
390
+    WAIT_FOR_CLUSTER=1
391
+    NETWORK_PLUGIN=
392
+    REMOVE_EXISTING_CLUSTER=
393
+    OPTIND=2
394
+    while getopts ":bin:rs" opt; do
395
+      case $opt in
396
+        b)
397
+          BUILD=1
398
+          ;;
399
+        i)
400
+          BUILD_IMAGES=1
401
+          ;;
402
+        n)
403
+          NETWORK_PLUGIN="${OPTARG}"
404
+          ;;
405
+        r)
406
+          REMOVE_EXISTING_CLUSTER=1
407
+          ;;
408
+        s)
409
+          WAIT_FOR_CLUSTER=
410
+          ;;
411
+        \?)
412
+          echo "Invalid option: -${OPTARG}" >&2
413
+          exit 1
414
+          ;;
415
+        :)
416
+          echo "Option -${OPTARG} requires an argument." >&2
417
+          exit 1
418
+          ;;
419
+      esac
420
+    done
421
+
422
+    if [[ -n "${REMOVE_EXISTING_CLUSTER}" ]]; then
423
+      stop "${CONFIG_ROOT}" "${CLUSTER_ID}"
424
+    fi
425
+
426
+    # Build origin if requested or required
427
+    if [[ -n "${BUILD}" || -z "$(os::build::find-binary oc)" ]]; then
428
+      "${OS_ROOT}/hack/build-go.sh"
429
+    fi
430
+
431
+    # Build images if requested or required
432
+    if [[ -n "${BUILD_IMAGES}" ||
433
+            -z "$(${DOCKER_CMD} images -q ${MASTER_IMAGE})" ]]; then
434
+      build-images "${OS_ROOT}"
435
+    fi
436
+
437
+    NETWORK_PLUGIN="$(get-network-plugin "${NETWORK_PLUGIN}")"
438
+    start "${OS_ROOT}" "${CONFIG_ROOT}" "${DEPLOYED_CONFIG_ROOT}" \
439
+          "${CLUSTER_ID}" "${NETWORK_PLUGIN}" "${WAIT_FOR_CLUSTER}" \
440
+          "${NODE_COUNT}" "${NODE_PREFIX}"
355 441
     ;;
356 442
   stop)
357
-    stop
358
-    ;;
359
-  restart)
360
-    stop
361
-    start
362
-    ;;
363
-  redeploy)
364
-    redeploy
443
+    stop "${CONFIG_ROOT}" "${CLUSTER_ID}"
365 444
     ;;
366 445
   wait-for-cluster)
367
-    wait-for-cluster
446
+    wait-for-cluster "${CONFIG_ROOT}" "${NODE_COUNT}"
368 447
     ;;
369 448
   build-images)
370
-    BUILD_IMAGES=1
371
-    build-images
372
-    ;;
373
-  config-host)
374
-    os::provision::set-os-env "${OS_ROOT}" "${CONFIG_ROOT}"
449
+    build-images "${OS_ROOT}"
375 450
     ;;
376 451
   *)
377
-    echo "Usage: $0 {start|stop|restart|redeploy|wait-for-cluster|build-images|config-host}"
452
+    >&2 echo "Usage: $0 {start|stop|wait-for-cluster|build-images}
453
+
454
+start accepts the following arguments:
455
+
456
+ -n [net plugin]   the name of the network plugin to deploy
457
+
458
+ -b                build origin before starting the cluster
459
+
460
+ -i                build container images before starting the cluster
461
+
462
+ -r                remove an existing cluster
463
+
464
+ -s                skip waiting for nodes to become ready
465
+"
378 466
     exit 2
379 467
 esac
... ...
@@ -1,14 +1,36 @@
1 1
 #
2
-# This image is used for running a host of an openshift dev cluster. This image is
3
-# a development support image and should not be used in production environments.
2
+# Image configured with systemd and docker-in-docker.  Useful for
3
+# simulating multinode deployments.
4 4
 #
5 5
 # The standard name for this image is openshift/dind
6 6
 #
7
+# Notes:
8
+#
9
+#  - disable SELinux on the docker host (not compatible with dind)
10
+#
11
+#  - to use the overlay graphdriver, ensure the overlay module is
12
+#    installed on the docker host
13
+#
14
+#      $ modprobe overlay
15
+#
16
+#  - run with --privileged
17
+#
18
+#      $ docker run -d --privileged openshift/dind
19
+#
20
+
7 21
 FROM fedora:24
8 22
 
23
+# Fix 'WARNING: terminal is not fully functional' when TERM=dumb
24
+ENV TERM=xterm
25
+
9 26
 ## Configure systemd to run in a container
27
+
10 28
 ENV container=docker
11 29
 
30
+VOLUME ["/run", "/tmp"]
31
+
32
+STOPSIGNAL SIGRTMIN+3
33
+
12 34
 RUN systemctl mask\
13 35
  auditd.service\
14 36
  console-getty.service\
... ...
@@ -24,34 +46,33 @@ RUN systemctl mask\
24 24
  systemd-udev-trigger.service\
25 25
  systemd-udevd.service\
26 26
  systemd-vconsole-setup.service
27
+RUN cp /usr/lib/systemd/system/dbus.service /etc/systemd/system/;\
28
+ sed -i 's/OOMScoreAdjust=-900//' /etc/systemd/system/dbus.service
27 29
 
28
-RUN cp /usr/lib/systemd/system/dbus.service /etc/systemd/system/; \
29
-  sed -i 's/OOMScoreAdjust=-900//' /etc/systemd/system/dbus.service
30
-
31
-VOLUME ["/run", "/tmp"]
30
+RUN dnf -y update && dnf -y install\
31
+ docker\
32
+ iptables\
33
+ openssh-server\
34
+ && dnf clean all
32 35
 
33
-## Install packages
34
-RUN dnf -y update && dnf -y install git golang hg tar make findutils \
35
-  gcc hostname bind-utils iproute iputils which procps-ng openssh-server \
36
-  # Node-specific packages
37
-  docker openvswitch bridge-utils ethtool iptables-services \
38
-  && dnf clean all
36
+## Configure docker
39 37
 
40
-# sshd should be enabled as needed
41
-RUN systemctl disable sshd.service
38
+RUN systemctl enable docker.service
42 39
 
43 40
 # Default storage to vfs.  overlay will be enabled at runtime if available.
44 41
 RUN echo "DOCKER_STORAGE_OPTIONS=--storage-driver vfs" >\
45 42
  /etc/sysconfig/docker-storage
46 43
 
47
-RUN systemctl enable docker
44
+COPY dind-setup.sh /usr/local/bin
45
+COPY dind-setup.service /etc/systemd/system/
46
+RUN systemctl enable dind-setup.service
48 47
 
49
-VOLUME /var/lib/docker
48
+VOLUME ["/var/lib/docker"]
50 49
 
51
-## Hardlink init to another name to avoid having oci-systemd-hooks
52
-## detect containers using this image as requiring read-only cgroup
53
-## mounts.  dind containers should be run with --privileged to ensure
54
-## cgroups mounted with read-write permissions.
50
+# Hardlink init to another name to avoid having oci-systemd-hooks
51
+# detect containers using this image as requiring read-only cgroup
52
+# mounts.  Containers running docker need to be run with --privileged
53
+# to ensure cgroups are mounted with read-write permissions.
55 54
 RUN ln /usr/sbin/init /usr/sbin/dind_init
56 55
 
57 56
 CMD ["/usr/sbin/dind_init"]
... ...
@@ -1,14 +1,36 @@
1 1
 #
2
-# This image is used for running a host of an openshift dev cluster. This image is
3
-# a development support image and should not be used in production environments.
2
+# Image configured with systemd and docker-in-docker.  Useful for
3
+# simulating multinode deployments.
4 4
 #
5 5
 # The standard name for this image is openshift/dind
6 6
 #
7
-FROM centos:centos7
7
+# Notes:
8
+#
9
+#  - disable SELinux on the docker host (not compatible with dind)
10
+#
11
+#  - to use the overlay graphdriver, ensure the overlay module is
12
+#    installed on the docker host
13
+#
14
+#      $ modprobe overlay
15
+#
16
+#  - run with --privileged
17
+#
18
+#      $ docker run -d --privileged openshift/dind
19
+#
20
+
21
+FROM centos:systemd
22
+
23
+# Fix 'WARNING: terminal is not fully functional' when TERM=dumb
24
+ENV TERM=xterm
8 25
 
9 26
 ## Configure systemd to run in a container
27
+
10 28
 ENV container=docker
11 29
 
30
+VOLUME ["/run", "/tmp"]
31
+
32
+STOPSIGNAL SIGRTMIN+3
33
+
12 34
 RUN systemctl mask\
13 35
  auditd.service\
14 36
  console-getty.service\
... ...
@@ -24,44 +46,33 @@ RUN systemctl mask\
24 24
  systemd-udev-trigger.service\
25 25
  systemd-udevd.service\
26 26
  systemd-vconsole-setup.service
27
+RUN cp /usr/lib/systemd/system/dbus.service /etc/systemd/system/;\
28
+ sed -i 's/OOMScoreAdjust=-900//' /etc/systemd/system/dbus.service
27 29
 
28
-RUN cp /usr/lib/systemd/system/dbus.service /etc/systemd/system/; \
29
-  sed -i 's/OOMScoreAdjust=-900//' /etc/systemd/system/dbus.service
30
-
31
-VOLUME ["/run", "/tmp"]
32
-
33
-## Install origin repo
34
-RUN INSTALL_PKGS="centos-release-openshift-origin" && \
35
-    yum install -y $INSTALL_PKGS && \
36
-    rpm -V $INSTALL_PKGS && \
37
-    yum clean all
30
+RUN yum -y update && yum -y install\
31
+ docker\
32
+ iptables\
33
+ openssh-server\
34
+ && yum clean all
38 35
 
39
-## Install packages
40
-RUN INSTALL_PKGS="git golang mercurial tar make findutils \
41
-      gcc hostname bind-utils iproute iputils which procps-ng openssh-server \
42
-      docker openvswitch bridge-utils ethtool iptables-services" && \
43
-    yum install -y $INSTALL_PKGS && \
44
-    rpm -V --nofiles $INSTALL_PKGS && \
45
-    yum clean all
36
+## Configure docker
46 37
 
47
-# sshd should be enabled as needed
48
-RUN systemctl disable sshd.service
38
+RUN systemctl enable docker.service
49 39
 
50
-## Configure dind
51
-ENV DIND_COMMIT 81aa1b507f51901eafcfaad70a656da376cf937d
52
-RUN curl -fL "https://raw.githubusercontent.com/docker/docker/${DIND_COMMIT}/hack/dind" \
53
-  -o /usr/local/bin/dind && chmod +x /usr/local/bin/dind
54
-RUN mkdir -p /etc/systemd/system/docker.service.d
55
-COPY dind.conf /etc/systemd/system/docker.service.d/
40
+# Default storage to vfs.  overlay will be enabled at runtime if available.
41
+RUN echo "DOCKER_STORAGE_OPTIONS=--storage-driver vfs" >\
42
+ /etc/sysconfig/docker-storage
56 43
 
57
-RUN systemctl enable docker
44
+COPY dind-setup.sh /usr/local/bin
45
+COPY dind-setup.service /etc/systemd/system/
46
+RUN systemctl enable dind-setup.service
58 47
 
59
-VOLUME /var/lib/docker
48
+VOLUME ["/var/lib/docker"]
60 49
 
61
-## Hardlink init to another name to avoid having oci-systemd-hooks
62
-## detect containers using this image as requiring read-only cgroup
63
-## mounts.  dind containers should be run with --privileged to ensure
64
-## cgroups mounted with read-write permissions.
50
+# Hardlink init to another name to avoid having oci-systemd-hooks
51
+# detect containers using this image as requiring read-only cgroup
52
+# mounts.  containers running docker need to be run with --privileged
53
+# to ensure cgroups are mounted with read-write permissions.
65 54
 RUN ln /usr/sbin/init /usr/sbin/dind_init
66 55
 
67 56
 CMD ["/usr/sbin/dind_init"]
68 57
new file mode 100644
... ...
@@ -0,0 +1,10 @@
0
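+# Oneshot setup for docker-in-docker: runs dind-setup.sh (shared root
+# mount, overlay storage probe) before the docker daemon starts.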
+[Unit]
1
+Description=docker-in-docker setup
2
+Before=docker.service
3
+
4
+[Service]
5
+Type=oneshot
6
+ExecStart=/usr/bin/bash /usr/local/bin/dind-setup.sh
7
+
8
+[Install]
9
+RequiredBy=docker.service
0 10
new file mode 100644
... ...
@@ -0,0 +1,45 @@
0
+#!/bin/bash
1
+
2
+set -o errexit
3
+set -o nounset
4
+set -o pipefail
5
+
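+# Prepares a dind container before its docker daemon starts: makes the
+# root mount shared and switches docker storage from vfs to overlay
+# when the overlay filesystem proves usable.
+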
6
+# Enable overlayfs for dind if it can be tested to work.
7
+function enable-overlay-storage() {
8
+  local storage_dir=${1:-/var/lib/docker}
9
+
10
+  local msg=""
11
+
12
+  if grep -q overlay /proc/filesystems; then
13
+    # Smoke test the overlay filesystem:
14
+
15
+    # 1. create smoke dir in the storage dir being mounted
16
+    local d="${storage_dir}/smoke"
17
+    mkdir -p "${d}/upper" "${d}/lower" "${d}/work" "${d}/mount"
18
+
19
+    # 2. try to mount an overlay fs on top of the smoke dir
20
+    local overlay_works=1
21
+    mount -t overlay overlay\
22
+          -o"lowerdir=${d}/lower,upperdir=${d}/upper,workdir=${d}/work"\
23
+          "${d}/mount" &&\
24
+    # 3. try to write a file in the overlay mount
25
+          echo foo > "${d}/mount/probe" || overlay_works=
26
+
27
+    umount -f "${d}/mount" || true
28
+    rm -rf "${d}" || true
29
+
30
+    if [[ -n "${overlay_works}" ]]; then
31
+      msg="Enabling overlay storage for docker-in-docker"
32
+      sed -i -e 's+vfs+overlay+' /etc/sysconfig/docker-storage
33
+    fi
34
+  fi
35
+
36
+  if [[ -z "${msg}" ]]; then
37
+    msg="WARNING: Unable to enable overlay storage for docker-in-docker"
38
+  fi
39
+
40
+  echo "${msg}"
41
+}
42
+
43
+mount --make-shared /
44
+enable-overlay-storage
0 45
new file mode 100644
... ...
@@ -0,0 +1,29 @@
0
+#
1
+# This image is for the master of an openshift dind dev cluster.
2
+#
3
+# The standard name for this image is openshift/dind-master
4
+#
5
+
6
+FROM openshift/dind-node
7
+
8
+# Disable iptables on the master since it will prevent access to the
9
+# openshift api from outside the master.
10
+RUN systemctl disable iptables.service
11
+
12
+COPY openshift-generate-master-config.sh /usr/local/bin/
13
+
14
+COPY openshift-disable-master-node.sh /usr/local/bin/
15
+COPY openshift-disable-master-node.service /etc/systemd/system/
16
+RUN systemctl enable openshift-disable-master-node.service
17
+
18
+COPY openshift-get-hosts.sh /usr/local/bin/
19
+COPY openshift-add-to-hosts.sh /usr/local/bin/
20
+COPY openshift-remove-from-hosts.sh /usr/local/bin/
21
+COPY openshift-sync-etc-hosts.service /etc/systemd/system/
22
+RUN systemctl enable openshift-sync-etc-hosts.service
23
+
24
+COPY openshift-master.service /etc/systemd/system/
25
+RUN systemctl enable openshift-master.service
26
+
27
+RUN mkdir -p /etc/systemd/system/openshift-node.service.d
28
+COPY master-node.conf /etc/systemd/system/openshift-node.service.d/
0 29
new file mode 100644
... ...
@@ -0,0 +1,3 @@
0
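+# Drop-in for openshift-node.service on the master: the node service
+# must start only after the master service is up.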
+[Unit]
1
+Requires=network.target openshift-master.service
2
+After=docker.target network.target openshift-master.service
0 3
new file mode 100755
... ...
@@ -0,0 +1,15 @@
0
+#!/bin/bash
1
+
2
+set -o errexit
3
+set -o nounset
4
+set -o pipefail
5
+
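+# Invoked by 'oc observe' (see openshift-sync-etc-hosts.service) with
+# the node name as $1 and its IP address as $2.  Appends "$2<tab>$1"
+# to /etc/hosts, first removing any stale entry for the same hostname.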
6
+ENTRY="$2\t$1"
7
+if ! grep -qP "${ENTRY}" /etc/hosts; then
8
+  # The ip + hostname combination are not present
9
+  if grep -qP "\t$1$" /etc/hosts; then
10
+    # The hostname is present with a different ip
11
+    /usr/local/bin/openshift-remove-from-hosts.sh "$1"
12
+  fi
13
+  echo -e "${ENTRY}" >> /etc/hosts
14
+fi
0 15
new file mode 100644
... ...
@@ -0,0 +1,11 @@
0
+[Unit]
1
+Description=Disable scheduling for master node
2
+Requires=openshift-node.service
3
+After=openshift-node.service
4
+
5
+[Service]
6
+Type=oneshot
7
+ExecStart=/usr/local/bin/openshift-disable-master-node.sh
8
+
9
+[Install]
10
+WantedBy=openshift-node.service
0 11
new file mode 100755
... ...
@@ -0,0 +1,28 @@
0
+#!/bin/bash
1
+
2
+set -o errexit
3
+set -o nounset
4
+set -o pipefail
5
+
6
+source /usr/local/bin/openshift-dind-lib.sh
7
+
8
+function is-node-registered() {
9
+  local config=$1
10
+  local node_name=$2
11
+
12
+  /usr/local/bin/oc --config="${config}" get nodes "${node_name}" &> /dev/null
13
+}
14
+
15
+function disable-node() {
16
+  local config=$1
17
+  local node_name=$2
18
+
19
+  local msg="${node_name} to register with the master"
20
+  local condition="is-node-registered ${config} ${node_name}"
21
+  os::util::wait-for-condition "${msg}" "${condition}" "${OS_WAIT_FOREVER}"
22
+
23
+  echo "Disabling scheduling for node ${node_name}"
24
+  /usr/local/bin/osadm --config="${config}" manage-node "${node_name}" --schedulable=false > /dev/null
25
+}
26
+
27
+disable-node /data/openshift.local.config/master/admin.kubeconfig "$(hostname)-node"
0 28
new file mode 100755
... ...
@@ -0,0 +1,40 @@
0
+#!/bin/bash
1
+
2
+set -o errexit
3
+set -o nounset
4
+set -o pipefail
5
+
6
+# Should set OPENSHIFT_NETWORK_PLUGIN
7
+source /data/network-plugin
8
+
9
+function ensure-master-config() {
10
+  local config_path="/data/openshift.local.config"
11
+  local master_path="${config_path}/master"
12
+  local config_file="${master_path}/master-config.yaml"
13
+
14
+  if [[ -f "${config_file}" ]]; then
15
+    # Config has already been generated
16
+    return
17
+  fi
18
+
19
+  local ip_addr
20
+  ip_addr="$(ip addr | grep inet | grep eth0 | awk '{print $2}' | sed -e 's+/.*++')"
21
+  local name
22
+  name="$(hostname)"
23
+
24
+  /usr/local/bin/openshift admin ca create-master-certs \
25
+    --overwrite=false \
26
+    --cert-dir="${master_path}" \
27
+    --master="https://${ip_addr}:8443" \
28
+    --hostnames="${ip_addr},${name}"
29
+
30
+  /usr/local/bin/openshift start master --write-config="${master_path}" \
31
+    --master="https://${ip_addr}:8443" \
32
+    --network-plugin="${OPENSHIFT_NETWORK_PLUGIN}"
33
+
34
+  # ensure the configuration is readable outside of the container
35
+  find "${config_path}" -exec chmod ga+rw {} \;
36
+  find "${config_path}" -type d -exec chmod ga+x {} \;
37
+}
38
+
39
+ensure-master-config
0 40
new file mode 100755
... ...
@@ -0,0 +1,2 @@
0
+#!/bin/sh
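+# Print the hostnames of the node entries in /etc/hosts; used as the
+# --names callback for 'oc observe' in openshift-sync-etc-hosts.service.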
1
+grep '\-node' /etc/hosts | awk '{print $2}'
0 2
new file mode 100644
... ...
@@ -0,0 +1,15 @@
0
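+# Generates the master config on first start (ExecStartPre), then runs
+# the openshift master with that config.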
+[Unit]
1
+Description=OpenShift Master
2
+Requires=network.target
3
+After=docker.target network.target
4
+
5
+[Service]
6
+ExecStartPre=/usr/local/bin/openshift-generate-master-config.sh
7
+ExecStart=/usr/local/bin/openshift start master --loglevel=5 \
8
+  --config=/data/openshift.local.config/master/master-config.yaml
9
+WorkingDirectory=/data
10
+Restart=on-failure
11
+RestartSec=10s
12
+
13
+[Install]
14
+WantedBy=multi-user.target
0 15
new file mode 100755
... ...
@@ -0,0 +1,5 @@
0
+#!/bin/sh
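+# Remove the /etc/hosts entry whose hostname field is exactly $1.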
1
+grep -vP "\t$1$" /etc/hosts > /tmp/newhosts
2
+# mv -f won't work due to the way docker mounts /etc/hosts
3
+cat /tmp/newhosts > /etc/hosts
4
+rm /tmp/newhosts
0 5
new file mode 100644
... ...
@@ -0,0 +1,16 @@
0
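+# Keep /etc/hosts in sync with the cluster's nodes via 'oc observe' so
+# that node names resolve on the master.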
+[Unit]
1
+Description=Synchronize /etc/hosts with cluster node state
2
+Requires=openshift-master.service
3
+After=openshift-master.service
4
+
5
+[Service]
6
+ExecStart=/usr/local/bin/oc observe nodes -a '{ .status.addresses[0].address }' \
7
+  --config=/data/openshift.local.config/master/admin.kubeconfig \
8
+  --names /usr/local/bin/openshift-get-hosts.sh \
9
+  --delete /usr/local/bin/openshift-remove-from-hosts.sh \
10
+  -- /usr/local/bin/openshift-add-to-hosts.sh
11
+Restart=on-failure
12
+RestartSec=10s
13
+
14
+[Install]
15
+WantedBy=openshift-master.service
0 16
new file mode 100644
... ...
@@ -0,0 +1,63 @@
0
+#
1
+# This is the base image for nodes of an openshift dind dev cluster.
2
+#
3
+# The standard name for this image is openshift/dind-node
4
+#
5
+
6
+FROM openshift/dind
7
+
8
+## Install packages
9
+RUN dnf -y update && dnf -y install\
10
+ bind-utils\
11
+ findutils\
12
+ hostname\
13
+ iproute\
14
+ iputils\
15
+ procps-ng\
16
+ tar\
17
+ which\
18
+ # Node-specific packages
19
+ bridge-utils\
20
+ ethtool\
21
+ iptables-services\
22
+ openvswitch\
23
+ && dnf clean all
24
+
25
+# A default deny firewall (either iptables or firewalld) is
26
+# installed by default on non-cloud Fedora and RHEL, so all
27
+# network plugins need to be able to work with one enabled.
28
+RUN systemctl enable iptables.service
29
+
30
+# Ensure that master-to-kubelet communication will work with iptables
31
+COPY iptables /etc/sysconfig/
32
+
33
+COPY openshift-generate-node-config.sh /usr/local/bin/
34
+COPY openshift-dind-lib.sh /usr/local/bin/
35
+
36
+RUN mkdir -p /etc/systemd/system/docker.service.d
37
+COPY docker-sdn-ovs.conf /etc/systemd/system/docker.service.d/
38
+
39
+RUN systemctl enable openvswitch
40
+
41
+COPY openshift-node.service /etc/systemd/system/
42
+RUN systemctl enable openshift-node.service
43
+# Ensure the working directory for the unit file exists
44
+RUN mkdir -p /var/lib/origin
45
+
46
+# Symlink from the data path intended to be mounted as a volume to
47
+# make reloading easy.  Revisit if/when dind becomes useful for more
48
+# than dev/test.
49
+RUN ln -sf /data/openshift-sdn-ovs /usr/local/bin/ && \
50
+    ln -sf /data/openshift-sdn-docker-setup.sh /usr/local/bin/ && \
51
+    ln -sf /data/openshift /usr/local/bin/ && \
52
+    ln -sf /data/openshift /usr/local/bin/oc && \
53
+    ln -sf /data/openshift /usr/local/bin/oadm && \
54
+    ln -sf /data/openshift /usr/local/bin/osc && \
55
+    ln -sf /data/openshift /usr/local/bin/osadm && \
56
+    ln -sf /data/openshift /usr/local/bin/kubectl && \
57
+    ln -sf /data/openshift /usr/local/bin/openshift-deploy && \
58
+    ln -sf /data/openshift /usr/local/bin/openshift-docker-build && \
59
+    ln -sf /data/openshift /usr/local/bin/openshift-sti-build && \
60
+    ln -sf /data/openshift /usr/local/bin/openshift-f5-router
61
+
62
+ENV KUBECONFIG /data/openshift.local.config/master/admin.kubeconfig
0 63
new file mode 100644
... ...
@@ -0,0 +1,2 @@
0
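+# Drop-in for docker.service: load environment written by openshift-sdn,
+# if present (the leading '-' makes the file optional).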
+[Service]
1
+EnvironmentFile=-/run/openshift-sdn/docker-network
0 2
new file mode 100644
... ...
@@ -0,0 +1,16 @@
0
+# sample configuration for iptables service
1
+# you can edit this manually or use system-config-firewall
2
+# please do not ask us to add additional ports/services to this default configuration
3
+*filter
4
+:INPUT ACCEPT [0:0]
5
+:FORWARD ACCEPT [0:0]
6
+:OUTPUT ACCEPT [0:0]
7
+-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
8
+-A INPUT -p icmp -j ACCEPT
9
+-A INPUT -i lo -j ACCEPT
10
+# Ensure the master can talk to the kubelet
11
+-A INPUT -p tcp -m state --state NEW -m tcp --dport 10250 -j ACCEPT
12
+-A INPUT -p tcp -m state --state NEW -m tcp --dport 22 -j ACCEPT
13
+-A INPUT -j REJECT --reject-with icmp-host-prohibited
14
+-A FORWARD -j REJECT --reject-with icmp-host-prohibited
15
+COMMIT
0 16
new file mode 100644
... ...
@@ -0,0 +1,53 @@
0
+#!/bin/bash
1
+#
2
+# This library holds utility functions used by dind deployment and images.  Since
3
+# it is intended to be distributed standalone in dind images, it cannot depend
4
+# on any functions outside of this file.
5
+
6
+# os::util::wait-for-condition blocks until the provided condition becomes true
7
+#
8
+# Globals:
9
+#  None
10
+# Arguments:
11
+#  - 1: message indicating what condition is being waited for (e.g. 'config to be written')
12
+#  - 2: a string representing an eval'able condition.  When eval'd it should not output
13
+#       anything to stdout or stderr.
14
+#  - 3: optional timeout in seconds.  If not provided, defaults to 60s.  If OS_WAIT_FOREVER
15
+#       is provided, wait forever.
16
+# Returns:
17
+#  1 if the condition is not met before the timeout
18
+readonly OS_WAIT_FOREVER=-1
19
+function os::util::wait-for-condition() {
20
+  local msg=$1
21
+  # condition should be a string that can be eval'd.  When eval'd, it
22
+  # should not output anything to stderr or stdout.
23
+  local condition=$2
24
+  local timeout=${3:-60}
25
+
26
+  local start_msg="Waiting for ${msg}"
27
+  local error_msg="[ERROR] Timeout waiting for ${msg}"
28
+
29
+  local counter=0
30
+  while ! ${condition}; do
31
+    if [[ "${counter}" = "0" ]]; then
32
+      echo "${start_msg}"
33
+    fi
34
+
35
+    if [[ "${counter}" -lt "${timeout}" ||
36
+            "${timeout}" = "${OS_WAIT_FOREVER}" ]]; then
37
+      counter=$((counter + 1))
38
+      if [[ "${timeout}" != "${OS_WAIT_FOREVER}" ]]; then
39
+        echo -n '.'
40
+      fi
41
+      sleep 1
42
+    else
43
+      echo -e "\n${error_msg}"
44
+      return 1
45
+    fi
46
+  done
47
+
48
+  if [[ "${counter}" != "0" && "${timeout}" != "${OS_WAIT_FOREVER}" ]]; then
49
+    echo -e '\nDone'
50
+  fi
51
+}
52
+readonly -f os::util::wait-for-condition
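+
+# Example usage (illustrative; any eval'able command that is quiet on
+# stdout/stderr can serve as the condition):
+#
+#   os::util::wait-for-condition 'config to be written' 'test -f /tmp/config' 30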
0 53
new file mode 100755
... ...
@@ -0,0 +1,61 @@
0
+#!/bin/bash
1
+
2
+set -o errexit
3
+set -o nounset
4
+set -o pipefail
5
+
6
+source /usr/local/bin/openshift-dind-lib.sh
7
+# Should set OPENSHIFT_NETWORK_PLUGIN
8
+source /data/network-plugin
9
+
10
+function ensure-node-config() {
11
+  local deployed_config_path="/var/lib/origin/openshift.local.config/node"
12
+  local deployed_config_file="${deployed_config_path}/node-config.yaml"
13
+
14
+  if [[ -f "${deployed_config_file}" ]]; then
15
+    # Config has already been deployed
16
+    return
17
+  fi
18
+
19
+  local config_path="/data/openshift.local.config"
20
+  local host
21
+  host="$(hostname)"
22
+  if [[ -f "/etc/systemd/system/openshift-master.service" ]]; then
23
+    host="${host}-node"
24
+  fi
25
+  local node_config_path="${config_path}/node-${host}"
26
+  local config_file="${node_config_path}/node-config.yaml"
27
+
28
+  # If the node config has not been generated
29
+  if [[ ! -f "${config_file}" ]]; then
30
+    local master_config_path="${config_path}/master"
31
+
32
+    # Wait for the master to generate its config
33
+    local condition="test -f ${master_config_path}/admin.kubeconfig"
34
+    os::util::wait-for-condition "admin config" "${condition}" "${OS_WAIT_FOREVER}"
35
+
36
+    local master_host
37
+    master_host="$(grep server "${master_config_path}/admin.kubeconfig" | grep -v localhost | awk '{print $2}')"
38
+
39
+    local ip_addr
40
+    ip_addr="$(ip addr | grep inet | grep eth0 | awk '{print $2}' | sed -e 's+/.*++')"
41
+
42
+    /usr/local/bin/openshift admin create-node-config \
43
+      --node-dir="${config_path}" \
44
+      --node="${host}" \
45
+      --master="${master_host}" \
46
+      --hostnames="${host},${ip_addr}" \
47
+      --network-plugin="${OPENSHIFT_NETWORK_PLUGIN}" \
48
+      --node-client-certificate-authority="${master_config_path}/ca.crt" \
49
+      --certificate-authority="${master_config_path}/ca.crt" \
50
+      --signer-cert="${master_config_path}/ca.crt" \
51
+      --signer-key="${master_config_path}/ca.key" \
52
+      --signer-serial="${master_config_path}/ca.serial.txt"
53
+  fi
54
+
55
+  # Deploy the node config
56
+  mkdir -p "${deployed_config_path}"
57
+  cp -r "${config_path}"/* "${deployed_config_path}"
58
+}
59
+
60
+ensure-node-config
0 61
new file mode 100644
... ...
@@ -0,0 +1,15 @@
0
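+# Generates and deploys the node config on first start (ExecStartPre),
+# then runs the openshift node with that config.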
+[Unit]
1
+Description=OpenShift Node
2
+Requires=network.target
3
+After=docker.target network.target
4
+
5
+[Service]
6
+ExecStartPre=/usr/local/bin/openshift-generate-node-config.sh
7
+ExecStart=/usr/local/bin/openshift start node --loglevel=5 \
8
+  --config=/var/lib/origin/openshift.local.config/node/node-config.yaml
9
+WorkingDirectory=/var/lib/origin
10
+Restart=on-failure
11
+RestartSec=5
12
+
13
+[Install]
14
+WantedBy=multi-user.target
... ...
@@ -110,26 +110,14 @@ function save-artifacts() {
110 110
 function deploy-cluster() {
111 111
   local name=$1
112 112
   local plugin=$2
113
-  local isolation=$3
114
-  local log_dir=$4
113
+  local log_dir=$3
115 114
 
116 115
   os::log::info "Launching a docker-in-docker cluster for the ${name} plugin"
117
-  export OPENSHIFT_NETWORK_PLUGIN="${plugin}"
118 116
   export OPENSHIFT_CONFIG_ROOT="${BASETMPDIR}/${name}"
119
-  export OPENSHIFT_NETWORK_ISOLATION="${isolation}"
120
-  # Images have already been built
121
-  export OPENSHIFT_DIND_BUILD_IMAGES=0
122 117
   DIND_CLEANUP_REQUIRED=1
123 118
 
124 119
   local exit_status=0
125
-
126
-  # Restart instead of start to ensure that an existing test cluster is
127
-  # always torn down.
128
-  if ${CLUSTER_CMD} restart; then
129
-    if ! ${CLUSTER_CMD} wait-for-cluster; then
130
-      exit_status=1
131
-    fi
132
-  else
120
+  if ! ${CLUSTER_CMD} start -r -n "${plugin}"; then
133 121
     exit_status=1
134 122
   fi
135 123
 
... ...
@@ -161,12 +149,13 @@ function test-osdn-plugin() {
161 161
   local deployment_failed=
162 162
   local tests_failed=
163 163
 
164
-  if deploy-cluster "${name}" "${plugin}" "${isolation}" "${log_dir}"; then
164
+  if deploy-cluster "${name}" "${plugin}" "${log_dir}"; then
165 165
     os::log::info "Running networking e2e tests against the ${name} plugin"
166 166
     export TEST_REPORT_FILE_NAME="${name}-junit"
167 167
 
168 168
     local kubeconfig="$(get-kubeconfig-from-root "${OPENSHIFT_CONFIG_ROOT}")"
169 169
     if ! TEST_REPORT_FILE_NAME=networking_${name}_${isolation} \
170
+         OPENSHIFT_NETWORK_ISOLATION="${isolation}" \
170 171
          run-extended-tests "${kubeconfig}" "${log_dir}/test.log"; then
171 172
       tests_failed=1
172 173
       os::log::error "e2e tests failed for plugin: ${plugin}"
... ...
@@ -187,7 +176,7 @@ function test-osdn-plugin() {
187 187
   os::log::info "Shutting down docker-in-docker cluster for the ${name} plugin"
188 188
   ${CLUSTER_CMD} stop
189 189
   DIND_CLEANUP_REQUIRED=0
190
-  rmdir "${OPENSHIFT_CONFIG_ROOT}"
190
+  rm -rf "${OPENSHIFT_CONFIG_ROOT}"
191 191
 }
192 192
 
193 193
 
... ...
@@ -279,12 +268,12 @@ else
279 279
 
280 280
   # Use a unique instance prefix to ensure the names of the test dind
281 281
   # containers will not clash with the names of non-test containers.
282
-  export OPENSHIFT_INSTANCE_PREFIX="nettest"
282
+  export OPENSHIFT_CLUSTER_ID="nettest"
283 283
   # TODO(marun) Discover these names instead of hard-coding
284 284
   CONTAINER_NAMES=(
285
-    "${OPENSHIFT_INSTANCE_PREFIX}-master"
286
-    "${OPENSHIFT_INSTANCE_PREFIX}-node-1"
287
-    "${OPENSHIFT_INSTANCE_PREFIX}-node-2"
285
+    "${OPENSHIFT_CLUSTER_ID}-master"
286
+    "${OPENSHIFT_CLUSTER_ID}-node-1"
287
+    "${OPENSHIFT_CLUSTER_ID}-node-2"
288 288
   )
289 289
 
290 290
   os::util::environment::setup_tmpdir_vars "test-extended/networking"
... ...
@@ -327,8 +316,5 @@ else
327 327
   # to be tested.
328 328
   test-osdn-plugin "subnet" "redhat/openshift-ovs-subnet" "false" || true
329 329
 
330
-  # Avoid unnecessary go builds for subsequent deployments
331
-  export OPENSHIFT_SKIP_BUILD=true
332
-
333 330
   test-osdn-plugin "multitenant" "redhat/openshift-ovs-multitenant" "true" || true
334 331
 fi