Hacking on OpenShift
====================

## Building a Release

To build an OpenShift release, run the `hack/build-release.sh` script on a
system with Docker. It creates a build environment image and then executes a
cross-platform Go build inside it. The build output is copied to
`_output/releases` as a set of tars containing each version. The script also
builds the `openshift/origin-base` image, which is the common parent image
for all OpenShift Docker images.

    $ make release

NOTE: Only committed code is built.

To build the base and release images, run:

    $ hack/build-base-images.sh
 
Once a release has been created, it can be pushed:

    $ hack/push-release.sh

To cut an official tagged release, we generally use the images built by
[ci.openshift.redhat.com](https://ci.openshift.redhat.com) under the
devenv_ami job.

1. Create a new git tag: `git tag vX.X.X -a -m "vX.X.X" HEAD`
2. Push the tag to GitHub: `git push origin --tags`, where `origin` is
   `github.com/openshift/origin.git`
3. Run the "devenv_ami" job
4. Once the images are pushed to the repository, run `OS_PUSH_TAG="vX.X.X"
   hack/push-release.sh`. The tag must match the Git tag.
5. Upload the binary artifacts generated by that build to the GitHub release
   page
6. Send an email to the dev list summarizing the important changes in the
   release.

We generally cut a release before disruptive changes land.
 
## Test Suites

OpenShift uses three levels of testing, much like Kubernetes: unit tests,
integration tests, and end-to-end tests.

### Unit tests

Unit tests follow standard Go conventions and are intended to test the
behavior and output of a single package in isolation. All code is expected to
be easily testable with mock interfaces and stubs; when it is not, that
usually means there is a missing interface or abstraction in the code. A unit
test should focus on verifying that branches and error conditions are
properly handled and that the interface and code flows work as described.
Unit tests can depend on other packages but should not depend on other
components (an API test should not be writing to etcd).

The unit tests for an entire package should not take more than 0.5s to run;
if they do, they are probably not really unit tests or need to be rewritten
to avoid sleeps or pauses. Coverage on a unit test should be above 70% unless
the units are a special case.

See `pkg/template/generator` for examples of unit tests. Unit tests should
follow Go conventions.
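
As an illustrative sketch (not code from the repository), a typical
table-driven Go unit test exercising the branches of a single hypothetical
function might look like:

```go
package generator_test

import (
	"strings"
	"testing"
)

// truncate is a stand-in for a function under test.
func truncate(s string, max int) string {
	if max <= 0 || len(s) <= max {
		return s
	}
	return s[:max]
}

// TestTruncate covers the normal path and the boundary branches in a single
// table, with no dependency on other components.
func TestTruncate(t *testing.T) {
	tests := []struct {
		name string
		in   string
		max  int
		want string
	}{
		{"shorter than max", "abc", 5, "abc"},
		{"exactly max", "abcde", 5, "abcde"},
		{"longer than max", strings.Repeat("x", 10), 5, "xxxxx"},
		{"non-positive max returns input", "abc", 0, "abc"},
	}
	for _, tt := range tests {
		if got := truncate(tt.in, tt.max); got != tt.want {
			t.Errorf("%s: truncate(%q, %d) = %q, want %q", tt.name, tt.in, tt.max, got, tt.want)
		}
	}
}
```

A test in this style runs in well under 0.5s and covers every branch of the
function it targets.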

Run the unit tests with:

    $ hack/test-go.sh

or an individual package using its relative path with:

    $ hack/test-go.sh pkg/build

or an individual package and all packages nested under it:

    $ hack/test-go.sh pkg/build/...

To run only the tests in a package that match a regex, use:

    $ hack/test-go.sh pkg/build -test.run=SynchronizeBuildRunning

To get verbose output for the above example:

    $ hack/test-go.sh pkg/build -test.run=SynchronizeBuildRunning -v

To run all tests with verbose output:

    $ hack/test-go.sh -v

To change the timeout for individual unit tests, which defaults to one
minute, use:

    $ TIMEOUT=<timeout> hack/test-go.sh
 
To enable running the Kubernetes unit tests:

    $ TEST_KUBE=true hack/test-go.sh

To run unit tests for an individual Kubernetes package:

    $ hack/test-go.sh vendor/k8s.io/kubernetes/examples

To change the coverage mode, which is `-cover -covermode=atomic` by default,
use:

    $ COVERAGE_SPEC="<some coverage specification>" hack/test-go.sh

To turn off coverage calculation, which is on by default, use:

    $ COVERAGE_SPEC= hack/test-go.sh

To run tests without the Go race detector, which is on by default, use:

    $ DETECT_RACES= hack/test-go.sh

To create a line coverage report, set `COVERAGE_OUTPUT_DIR` to a path where
the report should be stored. For example:

    $ COVERAGE_OUTPUT_DIR='/path/to/dir' hack/test-go.sh

After that you can open `/path/to/dir/coverage.html` in a browser.

To generate a jUnit XML report from the output of the tests, and see a
summary of the test output instead of the full test output, use:

    $ JUNIT_REPORT=true hack/test-go.sh

`hack/test-go.sh` cannot generate jUnit XML and a coverage report for all
packages at once. If you require both, you must call `hack/test-go.sh` twice.

 ### Integration tests
 
Integration tests cover multiple components acting together (generally, 2 or
3). These tests should focus on ensuring that naturally related components
work correctly. They should not extensively test branches or error conditions
inside packages (that is what unit tests do), but they should validate that
important success and error paths work across layers (especially when errors
are being converted from lower-level errors). Integration tests should not
test the details of inter-component connections: API tests should not verify
that JSON serialized to the wire is correctly converted back and forth (that
is unit test responsibility), but they should test that those connections
have the expected outcomes. The underlying goal of integration tests is to
wire together the most important components in isolation. Integration tests
should be as fast as possible so that they can be run repeatedly during
development. Integration tests that take longer than 0.5s are probably trying
to test too much together and should be reorganized into separate tests.
Integration tests should generally start from a clean slate, but if that
involves costly setup, those components should be tested in isolation.

We break integration tests into two categories: those that use Docker and
those that do not. In general, high-level components that depend on the
behavior of code running inside a Docker container should have at least one
or two integration tests that test all the way down to Docker, but those
should be part of their own test suite. Testing the API and high-level API
functions should generally not depend on calling into Docker. Docker-based
tests are denoted by special test tags and should be in their own files so we
can selectively build them.

All integration tests are located under `test/integration/*`. All integration
tests must set the `integration` build tag at the top of their source file,
and also declare whether they need etcd with the `etcd` build tag and whether
they need Docker with the `docker` build tag. For special function sets,
please create subdirectories like `test/integration/deployimages`.
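
For example, a minimal sketch of the build-tag header such a test file would
carry (the package and test body here are illustrative, not from the
repository):

```go
// +build integration,etcd

package integration

import "testing"

// TestExampleAcrossLayers shows where an integration test that needs etcd
// (but not Docker) would live; the file is compiled only when both the
// "integration" and "etcd" build tags are set.
func TestExampleAcrossLayers(t *testing.T) {
	// ...exercise two or three components together here...
}
```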
 
 Run the integration tests with:
 
     $ hack/test-integration.sh
 
The script launches an instance of etcd and then invokes the integration
tests. If you need to execute a subset of integration tests, run:

    $ hack/test-integration.sh <regex>

where `<regex>` is a regular expression matching the names of the tests you
want to run. The regular expression is passed to `grep -E`, so make sure the
syntax and features you use are supported. The default regular expression is
`Test`, which matches all tests.

Each integration function is executed in its own process so that it cleanly
shuts down any background goroutines. You will not be able to run more than a
single test within a single process.
 
There is a CLI integration test suite which covers general non-Docker
functionality of the CLI tool working against the API. Run it with:

    $ hack/test-cmd.sh

This suite comprises many smaller suites, which are found under `test/cmd`
and can be run individually by specifying a regex filter (passed through
`grep -E`, as with the integration tests above):

    $ hack/test-cmd.sh <regex>

During development, you can run a file under `test/cmd/*.sh` directly to test
against a running server, which can speed up the feedback loop considerably.
All `test/cmd/*` tests are expected to be repeatable; please file bugs if a
test needs cleanup before running.

For example, start the OpenShift server, create a "test" project, and then
run `oc new-app` tests against the server:

    $ oc new-project test
    $ test/cmd/newapp.sh

To run the suite, generate a jUnit XML report, and see a summary of the test
suite, use:

    $ JUNIT_REPORT='true' hack/test-cmd.sh
 
### End-to-End (e2e) and Extended Tests

The final test category is end-to-end (e2e) tests, which should verify long
flows through the product as a user would see them. Two e2e tests should not
overlap by more than 10% of function, and e2e tests are not intended to test
error conditions in detail. The project examples should be driven by e2e
tests. e2e tests can also test external components working together.

The end-to-end suite is currently implemented primarily in Bash, but will be
folded into the extended suite (located in `test/extended`) over time. The
extended suite is closer to the upstream Kubernetes e2e suite and tests the
full behavior of a running system.

Run the end-to-end tests with:

    $ hack/test-end-to-end.sh

Run the extended tests with:

    $ test/extended/core.sh

This suite comprises many smaller suites, which are found under
`test/extended` and can be run individually by specifying `--ginkgo.focus`
and a regex filter:

    $ test/extended/core.sh --ginkgo.focus=<regex>

In addition, the extended tests can be run against an existing OpenShift
cluster:

    $ KUBECONFIG=/path/to/admin.kubeconfig TEST_ONLY=true test/extended/core.sh --ginkgo.focus=<regex>

Extended tests should be Go tests in the `test/extended` directory that use
the Ginkgo library. They must be able to run remotely, and cannot depend on
any local interaction with the filesystem or Docker.

More information about running extended tests can be found in
[test/extended/README](https://github.com/openshift/origin/blob/master/test/extended/README.md).
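
A bare-bones sketch of the Ginkgo style these tests use (the names and the
assertion here are illustrative, not from the repository):

```go
package extended

import (
	g "github.com/onsi/ginkgo"
	o "github.com/onsi/gomega"
)

// An extended test is described with Ginkgo's BDD-style blocks; the strings
// passed to Describe/It are what --ginkgo.focus matches against.
var _ = g.Describe("[builds] example feature", func() {
	g.It("should do the expected thing against a running cluster", func() {
		result := 1 + 1 // stand-in for a call against the cluster API
		o.Expect(result).To(o.Equal(2))
	})
})
```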
 
## Installing Godep

OpenShift and Kubernetes use [Godep](https://github.com/tools/godep) for
dependency management. Godep allows the versions of dependent packages to be
locked at a specific commit by *vendoring* them (checking a copy of them into
`vendor/`). This means that everything you need for OpenShift is checked into
this repository.

To install `godep` locally, run:

    $ go get github.com/tools/godep

If you are not updating packages, you should not need godep installed.
 

## Cherry-picking an upstream commit into Origin: why, how, and when

Origin carries patches inside `vendor/` on top of each rebase. Thus, Origin
carries upstream patches in two ways:

1. *Periodic rebases* against a Kubernetes commit. Eventually, any code you
have in upstream Kubernetes will land in OpenShift via this mechanism.

2. Cherry-picked patches for important *bug fixes*. We try hard to avoid
back-porting features entirely.
 
 ### Manually
 
You can manually cherry-pick a commit using `git apply`. This can be done in
a couple of steps:

- Download the patch, e.g. `wget -O /tmp/mypatch
  https://github.com/kubernetes/kubernetes/pull/34624.patch`
- `git apply --directory vendor/k8s.io/kubernetes /tmp/mypatch`

If this fails, it's possible that you need to pick multiple commits.
 
### For OpenShift newcomers: pick my Kubernetes fix into OpenShift, or wait for the next rebase?

Assuming you read the bullets above: if your patch is far behind (for
example, if there have been 5 commits modifying the directory you care
about), cherry-picking will be increasingly difficult, and you should
consider waiting for the next rebase, which will likely include the commit
you care about, or at least decrease the number of cherry-picks you need to
do to merge.

To really know the answer, you often need to know *how many commits behind
you are in a particular directory*.

To find out, use `git log` like so (using `pkg/scheduler/algorithm` as an
example):

```
MYDIR=pkg/scheduler/algorithm
git log --oneline -- vendor/k8s.io/kubernetes/${MYDIR} | grep UPSTREAM | cut -d' ' -f 4-10 | head -1
```

The commit message printed above will tell you what the LAST commit in
Kubernetes was that affected that directory, which will give you an intuition
about how "hot" the code you are cherry-picking is. If it has changed a lot
recently, you probably will want to wait for a rebase to land.
 
 ### Using hack/cherry-pick
 
For convenience, you can use `hack/cherry-pick.sh` to generate patches for
Origin from upstream commits.

The purpose of this command is to allow you to pull individual commits from a
local Kubernetes repository into Origin's vendored Kubernetes in a fully
automated manner.

To use this command, be sure to set up remote pull request branches in the
Kubernetes repository you are using (e.g. as described in
https://gist.github.com/piscisaureus/3342247). Specifically, add this to the
git config you probably already have for Kubernetes:

```
[remote "origin"]
        url = https://github.com/kubernetes/kubernetes
        fetch = +refs/heads/*:refs/remotes/origin/*
        ### Add this line
        fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
```

so that `git show origin/pr/<number>` displays information about your branch
after a `git fetch`.

You must also have the Kubernetes repository checked out in your GOPATH
(visible as `../../../k8s.io/kubernetes`), with openshift/kubernetes as a
remote and fetched:

    $ pushd $GOPATH/src/k8s.io/kubernetes
    $ git remote add openshift https://github.com/openshift/kubernetes.git
    $ git fetch openshift
    $ popd

There must be no modified or uncommitted files in either repository.
 
 To pull an upstream commit, run:
 
     $ hack/cherry-pick.sh <pr_number>
 
This will attempt to create a patch from the current Kube rebase version in
Origin that contains the commits added in the PR. If the PR has already been
merged to the Kube version, you'll get an error. If there are conflicts,
you'll have to resolve them in the upstream repo, then hit ENTER to continue.
The end result will be a single commit in your Origin repo that contains the
changes.

If you want to run without a rebase option, set `NO_REBASE=1` before the
command is run. You can also specify a commit range directly with:

    $ hack/cherry-pick.sh origin/master...<some_branch>

All upstream commits should have a commit message where the first line is:

    UPSTREAM: <PR number|drop|carry>: <short description>

`drop` indicates the commit should be removed during the next rebase. `carry`
means that the change cannot go into upstream, and we should continue to
apply it during the next rebase.

You can also target repositories other than Kube by setting the
`UPSTREAM_REPO` and `UPSTREAM_PACKAGE` env vars. `UPSTREAM_REPO` should be
the full name of the Git repo as Go sees it, e.g. `github.com/coreos/etcd`,
and `UPSTREAM_PACKAGE` must be a package inside that repo that is currently
part of the Godeps.json file. Example:

    $ UPSTREAM_REPO=github.com/coreos/etcd UPSTREAM_PACKAGE=store hack/cherry-pick.sh <pr_number>

By default, `hack/cherry-pick.sh` uses the git remote named `origin` to fetch
the Kubernetes repository. If your git configuration is different, you can
pass the git remote name by setting the `UPSTREAM_REMOTE` env var:

    $ UPSTREAM_REMOTE=upstream hack/cherry-pick.sh <pr_number>
 
## Moving a commit you developed in Origin to an upstream

The `hack/move-upstream.sh` script takes the current feature branch, finds
any changes to the requested upstream project (as defined by `UPSTREAM_REPO`
and `UPSTREAM_PACKAGE`) that differ from `origin/master`, and then creates a
new commit in that upstream project on a branch with the same name as your
current branch.

For example, to upstream a commit to OpenShift source-to-image while working
from Origin:

    $ git checkout my_feature_branch_in_origin
    $ git log --oneline
    70ffe7e Docker and STI builder support binary extraction
    75a22de UPSTREAM: <sti>: Allow prepared directories to be passed to STI
    86eefdd UPSTREAM: 14618: Refactor exec to allow reuse from server

    # we want to move our STI changes to upstream
    $ UPSTREAM_REPO=github.com/openshift/source-to-image UPSTREAM_PACKAGE=pkg/api hack/move-upstream.sh
    ...

    # all changes to source-to-image in Godeps/ are now in a commit UPSTREAMED in the s2i repo

    $ cd ../source-to-image
    $ git log --oneline
    c0029f6 UPSTREAMED
    ... # older commits

The default is to work against Kube.
 
## Updating Kubernetes from upstream

There are a few steps involved in rebasing Origin to a new version of
Kubernetes. We need to make sure not only that the Kubernetes packages were
updated correctly into `Godeps`, but also that *all tests still run without
errors* and that *code changes, refactorings, and the inclusion/removal of
attributes are properly reflected* in the Origin codebase.

### 1. Preparation

Before you begin, make sure you have both
[openshift/origin](https://github.com/openshift/origin) and
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) in your
$GOPATH. You may want to work on a separate $GOPATH just for the rebase:

```
$ go get github.com/openshift/origin
$ go get k8s.io/kubernetes
```
 
You must add the Origin GitHub fork as a remote in your k8s.io/kubernetes
repo:

```
$ cd $GOPATH/src/k8s.io/kubernetes
$ git remote add openshift git@github.com:openshift/kubernetes.git
$ git fetch openshift
```

Check out the version of Kubernetes you want to rebase as a branch or tag
named `stable_proposed` in
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes). For
example, if you are going to rebase the latest `master` of Kubernetes:

```
$ cd $GOPATH/src/k8s.io/kubernetes
$ git checkout master
$ git pull
$ git checkout -b stable_proposed
```
 
 ### 2. Rebase Origin to the new Kubernetes version
 
 #### 2.1. First option (preferred): using the rebase-kube.sh script
 
If all the requirements described in *Preparation* were correctly attended
to, you should not have any trouble rebasing the Kubernetes code using the
script that automates this process:

```
$ cd $GOPATH/src/github.com/openshift/origin
$ hack/rebase-kube.sh
```

Read over the changes with `git status` and make sure they look reasonable.
Check the `Godeps/Godeps.json` file especially, to make sure no dependency is
unintentionally missing.

Commit using the message `bump(k8s.io/kubernetes):<commit SHA>`, where
`<commit SHA>` is the commit id for the Kubernetes version we are including
in our Godeps. It can be found in our `Godeps/Godeps.json`, in the
declaration of any Kubernetes package.
 
 #### 2.2. Second option: manually
 
If for any reason you had trouble rebasing using the script, you may need to
do it manually. After satisfying the requirements described in *Preparation*,
you will need to run `godep restore` from both the Origin and the Kubernetes
directories and then `godep save ./...` from the Origin directory. Follow
these steps:

1. `$ cd $GOPATH/src/github.com/openshift/origin`
2. `$ make clean ; godep restore` will restore the package versions specified
in the `Godeps/Godeps.json` of Origin to your GOPATH.
3. `$ cd $GOPATH/src/k8s.io/kubernetes`
4. `$ git checkout stable_proposed` will check out the desired version of
Kubernetes as branched in *Preparation*.
5. `$ godep restore` will restore the package versions specified in the
`Godeps/Godeps.json` of Kubernetes to your GOPATH.
6. `$ cd $GOPATH/src/github.com/openshift/origin`
7. `$ make clean ; godep save ./...` will save a list of the checked-out
dependencies to the file `Godeps/Godeps.json`, and copy their source code
into `vendor`.
8. If in the previous step godep complains about the checked-out revision of
a package being different from the wanted revision, this probably means there
are new packages in Kubernetes that we need to add. Do a `$ godep save
<pkgname>` with the package specified by the error message and then `$ godep
save ./...` again.
9. Read over the changes with `git status` and make sure they look
reasonable. Check the `Godeps/Godeps.json` file especially, to make sure no
dependency is unintentionally missing. The whole Godeps directory will be
added to version control, including `_workspace`.
10. Commit using the message `bump(k8s.io/kubernetes):<commit SHA>`, where
`<commit SHA>` is the commit id for the Kubernetes version we are including
in our Godeps. It can be found in our `Godeps/Godeps.json`, in the
declaration of any Kubernetes package.

If in the process of rebasing manually you find any corner case not handled
by the `hack/rebase-kube.sh` script, make sure you update the script
accordingly to help future rebases.
 
### 3. Cherry-pick upstream changes pushed to the Origin repo

Occasionally during the development cycle we introduce changes to
dependencies right in the Origin repository. This is not a generally
recommended practice, but it's useful if we need something that, for example,
is in the Kubernetes repository but we are not doing a rebase yet. When doing
the next rebase, we need to make sure we get all these changes; otherwise
they will be overridden by `godep save`.

1. Check the `Godeps` directory [commit
history](https://github.com/openshift/origin/commits/master/Godeps) for
commits tagged with the *UPSTREAM* keyword. We will need to cherry-pick *all
UPSTREAM commits since the last Kubernetes rebase* (remember you can find the
last rebase commit by looking for a message like
`bump(k8s.io/kubernetes):...`).
2. For every commit tagged UPSTREAM, do `git cherry-pick <commit SHA>`.
3. Note that occasionally a cherry-pick will be empty. This probably means
the given change was already merged in Kubernetes and we don't need to
specifically add it to our Godeps. Nice!
4. Read over the commit history and make sure you have every UPSTREAM commit
since the last rebase (except the empty ones).
 
 ### 4. Refactor Origin to be compliant with upstream changes
 
After making sure we have all the dependencies in place and up to date, we
need to work on the Origin codebase to make sure compilation is not broken,
all tests pass, and the code is compliant with any refactorings,
architectural changes, or behavior changes introduced in Kubernetes. Make
sure:

1. `make clean ; hack/build-go.sh` compiles without errors and the standalone
server starts correctly.
2. All of our generated code is up to date, by running all `hack/update-*`
scripts.
3. `hack/verify-open-ports.sh` runs without errors.
4. `hack/copy-kube-artifacts.sh` has been run so Kubernetes tests can be
fully functional. The diff resulting from this script should be squashed into
the Kube bump commit.
5. `TEST_KUBE=1 hack/test-go.sh` runs without errors.
6. `hack/test-cmd.sh` runs without errors.
7. `hack/test-integration.sh` runs without errors.
8. `hack/test-end-to-end.sh` runs without errors. See *Building a Release*
above for setting up the environment for the *test-end-to-end.sh* tests.
 
It is helpful to look at the Kubernetes commit history to be aware of the
major topics. Although a rebase can potentially break or change any part of
Origin, the most affected parts are usually:

1. https://github.com/openshift/origin/blob/master/pkg/cmd/server/start
2. https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go
3. https://github.com/openshift/origin/blob/master/pkg/cmd/server/origin/master.go
4. https://github.com/openshift/origin/blob/master/pkg/cmd/util/clientcmd/factory.go
5. https://github.com/openshift/origin/blob/master/pkg/cmd/cli/cli.go
6. https://github.com/openshift/origin/blob/master/pkg/api/meta/meta.go

Place all your changes in a commit called "Refactor to match changes
upstream".
 
 ### 5. Pull request
 
 A typical pull request for your Kubernetes rebase will contain:
 
1. One commit for the Kubernetes Godeps bump
(`bump(k8s.io/kubernetes):<commit SHA>`).
2. Zero, one, or more bump commits for any **shared** dependencies between
Origin and Kubernetes that have been bumped. Any transitive dependencies
coming from Kubernetes should be squashed into the Kube bump commit.
3. Zero, one, or more cherry-picked commits tagged UPSTREAM.
4. One commit "Boring refactor to match changes upstream" that includes
boring changes like import rewriting, etc.
5. One commit "Interesting refactor to match changes upstream" that includes
interesting changes like new plugins or controller changes.
 
 ## Updating other Godeps from upstream

To update to a new version of a dependency that's not already included in
Kubernetes, check out the correct version in your GOPATH and then run `godep
save <pkgname>`. This should create a new version of `Godeps/Godeps.json` and
update `vendor`. Create a commit that includes both of these changes with the
message `bump(<pkgname>): <pkgcommit>`.

## Updating external examples

`hack/update-external-example.sh` will pull down example files from external
repositories and deposit them under the `examples` directory. Run this script
if you need to refresh an example file or add a new one. See the script and
`examples/quickstarts/README.md` for more details.
 
## Troubleshooting

If you run into difficulties running OpenShift, start by reading through the
[troubleshooting guide](https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md).
 
 ## RPM Packaging
 
A specfile is included in this repo which can be used to produce RPMs
including the openshift binary. While the specfile will be kept up to date
with build requirements, the version is not updated. You will need to either
update the Version, %commit, and %ldflags values on your own, or use
[tito](https://github.com/dgoodwin/tito) to build and tag releases.
 
## GSSAPI-enabled builds

When built with GSSAPI support, the `oc` client supports logging in with
Kerberos credentials on Linux and OS X. GSSAPI-enabled builds of `oc` cannot
be cross-compiled; they must be built on the target platform with the GSSAPI
header files available.
 
 On Linux, ensure the `krb5-devel` package is installed:
 
     $ sudo yum install -y krb5-devel
 
 On OS X, you can obtain header files via Homebrew:
 
     $ brew install homebrew/dupes/heimdal --without-x11
 
 Once dependencies are in place, build with the `gssapi` tag:
 
     $ hack/build-go.sh cmd/oc -tags=gssapi
 
Verify that the GSSAPI feature is enabled with `oc version`:
 
     $ oc version
     ...
     features: Basic-Auth GSSAPI Kerberos SPNEGO
 
## Swagger API Documentation

OpenShift and Kubernetes integrate with the [Swagger 2.0 API
framework](http://swagger.io), which aims to make it easier to document and
write clients for RESTful APIs. When you start OpenShift, the Swagger API
endpoint is exposed at `https://localhost:8443/swaggerapi`. The Swagger UI
makes it easy to view your documentation; to view the docs for your local
version of OpenShift, start the server with CORS enabled:

    $ openshift start --cors-allowed-origins=.*

and then browse to http://openshift3swagger-claytondev.rhcloud.com (which
runs a copy of the Swagger UI that points to localhost:8080 by default).
Expand the operations available on v1 to see the schemas (and to try the API
directly). Alternatively, you can download the Swagger UI from
http://swagger.io/swagger-ui/ and point it at your local Swagger API
endpoint.

Note: Hosted API documentation can be found
[here](http://docs.openshift.org/latest/rest_api/openshift_v1.html).
 
 
 ## Performance debugging
 
OpenShift integrates the Go `pprof` tooling to make it easy to capture CPU
and heap dumps for running systems. The following modes are available for the
`openshift` binary (including all the CLI variants):

* `OPENSHIFT_PROFILE` environment variable:
  * `cpu` - starts a CPU profile on startup and writes `./cpu.pprof`.
    Contains samples for the entire run at the native sampling resolution
    (100hz). Note: CPU profiling for Go does not currently work on Mac OS X;
    the stats are not correctly sampled.
  * `mem` - generates a running heap dump that tracks allocations to
    `./mem.pprof`.
  * `block` - starts a block wait time analysis and writes `./block.pprof`.
  * `web` - starts the pprof webserver in-process at
    http://127.0.0.1:6060/debug/pprof (you can open this in a browser). This
    supports `OPENSHIFT_PROFILE_HOST=` and `OPENSHIFT_PROFILE_PORT=` to
    change the default IP (`127.0.0.1`) and default port (`6060`).
 
To start the server in CPU profiling mode, run:

    $ OPENSHIFT_PROFILE=cpu sudo ./_output/local/bin/linux/amd64/openshift start

Or, if running OpenShift under systemd, append this to
`/etc/sysconfig/atomic-openshift-{master,node}`:
 
     OPENSHIFT_PROFILE=cpu
 
To view profiles, use
[pprof](http://goog-perftools.sourceforge.net/doc/cpu_profiler.html), which
is part of `go tool`. You must pass the binary you are debugging (for
symbols) and a captured pprof file. For instance, to view a `cpu` profile
from above, you would run OpenShift to completion, and then run one of:

    $ go tool pprof ./_output/local/bin/linux/amd64/openshift cpu.pprof
    $ go tool pprof $(which openshift) /var/lib/origin/cpu.pprof
 
 This will open the `pprof` shell, and you can then run:
 
     # see the top 20 results
     (pprof) top20
 
     # see the top 50 results
     (pprof) top50
 
     # show the top20 sorted by cumulative time
     (pprof) cum=true
     (pprof) top20
 
to see the top 20 CPU-consuming functions, or
 
     (pprof) web
 
 to launch a web browser window showing you where CPU time is going.
 
`pprof` supports CLI arguments for looking at profiles in different ways;
memory profiles by default show allocated space:

    $ go tool pprof ./_output/local/bin/linux/amd64/openshift mem.pprof
 
 but you can also see the allocated object counts:
 
    $ go tool pprof --alloc_objects ./_output/local/bin/linux/amd64/openshift mem.pprof

Finally, when using the `web` profile mode, you can have the go tool fetch
your profiles directly via HTTP:

    # for a 30s CPU trace
    $ go tool pprof ./_output/local/bin/linux/amd64/openshift http://127.0.0.1:6060/debug/pprof/profile

    # for a snapshot heap dump at the current time, showing total allocations
    $ go tool pprof --alloc_space ./_output/local/bin/linux/amd64/openshift http://127.0.0.1:6060/debug/pprof/heap

See [debugging Go programs](https://golang.org/pkg/net/http/pprof/) for more
info. `pprof` has many modes and is very powerful (try `tree`). You can pass
a regex to many arguments to limit your results to only those samples that
match the regex (basically the function name or the call stack).