...
@@ -3,10 +3,12 @@ Hacking on OpenShift

 ## Building a Release

-To build an OpenShift release you run the `hack/build-release.sh` script on a system with Docker, which
-will create a build environment image and then execute a cross platform Go build within it. The build
-output will be copied to `_output/releases` as a set of tars containing each version. It will also build
-the `openshift/origin-base` image which is the common parent image for all OpenShift Docker images.
+To build an OpenShift release you run the `hack/build-release.sh` script on a
+system with Docker, which will create a build environment image and then
+execute a cross platform Go build within it. The build output will be copied
+to `_output/releases` as a set of tars containing each version. It will also
+build the `openshift/origin-base` image which is the common parent image for all
+OpenShift Docker images.

 $ make release

...
@@ -20,38 +22,46 @@ Once a release has been created, it can be pushed:

 $ hack/push-release.sh

-To cut an official tag release, we generally use the images built by [ci.openshift.redhat.com](https://ci.openshift.redhat.com)
+To cut an official tag release, we generally use the images built by
+[ci.openshift.redhat.com](https://ci.openshift.redhat.com)
 under the devenv_ami job.

 1. Create a new git tag `git tag vX.X.X -a -m "vX.X.X" HEAD`
-2. Push the tag to GitHub `git push origin --tags` where `origin` is `github.com/openshift/origin.git`
+2. Push the tag to GitHub `git push origin --tags` where `origin` is
+`github.com/openshift/origin.git`
 3. Run the "devenv_ami" job
-4. Once the images are pushed to the repository, run `OS_PUSH_TAG="vX.X.X" hack/push-release.sh`. Your tag must match the Git tag.
+4. Once the images are pushed to the repository, run `OS_PUSH_TAG="vX.X.X"
+hack/push-release.sh`. Your tag must match the Git tag.
 5. Upload the binary artifacts generated by that build to GitHub release page
-6. Send an email to the dev list, including the important changes prior to the release.
+6. Send an email to the dev list, including the important changes prior to the
+release.

 We generally cut a release before disruptive changes land.


 ## Test Suites

-OpenShift uses three levels of testing - unit tests, integration test, and end-to-end tests (much
-like Kubernetes).
+OpenShift uses three levels of testing - unit tests, integration tests, and
+end-to-end tests (much like Kubernetes).

 ### Unit tests

-Unit tests follow standard Go conventions and are intended to test the behavior and output of a
-single package in isolation. All code is expected to be easily testable with mock interfaces and
-stubs, and when they are not it usually means that there's a missing interface or abstraction in the
-code. A unit test should focus on testing that branches and error conditions are properly returned
-and that the interface and code flows work as described. Unit tests can depend on other packages but
-should not depend on other components (an API test should not be writing to etcd).
+Unit tests follow standard Go conventions and are intended to test the behavior
+and output of a single package in isolation. All code is expected to be easily
+testable with mock interfaces and stubs, and when they are not it usually means
+that there's a missing interface or abstraction in the code. A unit test should
+focus on testing that branches and error conditions are properly returned and
+that the interface and code flows work as described. Unit tests can depend
+on other packages but should not depend on other components (an API test should
+not be writing to etcd).

-The unit tests for an entire package should not take more than 0.5s to run, and if they do, are
-probably not really unit tests or need to be rewritten to avoid sleeps or pauses. Coverage on a unit
-test should be above 70% unless the units are a special case.
+The unit tests for an entire package should not take more than 0.5s to run, and
+if they do, are probably not really unit tests or need to be rewritten to avoid
+sleeps or pauses. Coverage on a unit test should be above 70% unless the units
+are a special case.

-See `pkg/template/generator` for examples of unit tests. Unit tests should follow Go conventions.
+See `pkg/template/generator` for examples of unit tests. Unit tests should
+follow Go conventions.

 Run the unit tests with:

...
@@ -77,7 +87,8 @@ To run all tests with verbose output:

 $ hack/test-go.sh -v

-To change the timeout for individual unit tests, which defaults to one minute, use:
+To change the timeout for individual unit tests, which defaults to one minute,
+use:

 $ TIMEOUT=<timeout> hack/test-go.sh

...
@@ -89,7 +100,8 @@ To run unit test for an individual kubernetes package:

 $ hack/test-go.sh vendor/k8s.io/kubernetes/examples

-To change the coverage mode, which is `-cover -covermode=atomic` by default, use:
+To change the coverage mode, which is `-cover -covermode=atomic` by default,
+use:

 $ COVERAGE_SPEC="<some coverage specification>" hack/test-go.sh

...
@@ -108,71 +120,84 @@ report should be stored. For example:

 After that you can open `/path/to/dir/coverage.html` in the browser.

-To generate a jUnit XML report from the output of the tests, and see a summary of the test output
-instead of the full test output, use:
+To generate a jUnit XML report from the output of the tests, and see a summary
+of the test output instead of the full test output, use:

 $ JUNIT_REPORT=true hack/test-go.sh

-`hack/test-go.sh` cannot generate jUnit XML and a coverage report for all packages at once. If you
-require both, you must call `hack/test-go.sh` twice.
+`hack/test-go.sh` cannot generate jUnit XML and a coverage report for all
+packages at once. If you require both, you must call `hack/test-go.sh` twice.

 ### Integration tests

-Integration tests cover multiple components acting together (generally, 2 or 3). These tests should
-focus on ensuring that naturally related components work correctly. They should not be extensively
-testing branches or error conditions inside packages (that's what unit tests do), but they should
-validate that important success and error paths work across layers (especially when errors are being
-converted from lower level errors). Integration tests should not be testing details of the
-inter-component connections - API tests should not test that the JSON serialized to the wire is
-correctly converted back and forth (unit test responsibility), but they should test that those
-connections have the expected outcomes. The underlying goal of integration tests is to wire together
-the most important components in isolation. Integration tests should be as fast as possible in order
-to enable them to be run repeatedly during testing. Integration tests that take longer than 0.5s
-are probably trying to test too much together and should be reorganized into separate tests.
-Integration tests should generally be written so that they are starting from a clean slate, but if
-that involves costly setup those components should be tested in isolation.
-
-We break integration tests into two categories, those that use Docker and those that do not. In
-general, high-level components that depend on the behavior of code running inside a Docker container
-should have at least one or two integration tests that test all the way down to Docker, but those
-should be part of their own test suite. Testing the API and high level API functions should
-generally not depend on calling into Docker. They are denoted by special test tags and should be in
-their own files so we can selectively build them.
-
-All integration tests are located under `test/integration/*`. All integration tests must set the
-`integration` build tag at the top of their source file, and also declare whether they need etcd
-with the `etcd` build tag and whether they need Docker with the `docker` build tag. For
-special function sets please create sub directories like `test/integration/deployimages`.
+Integration tests cover multiple components acting together (generally, 2 or
+3). These tests should focus on ensuring that naturally related components work
+correctly. They should not be extensively testing branches or error conditions
+inside packages (that's what unit tests do), but they should validate that
+important success and error paths work across layers (especially when errors
+are being converted from lower level errors). Integration tests should not be
+testing details of the inter-component connections - API tests should not test
+that the JSON serialized to the wire is correctly converted back and forth (unit test
+responsibility), but they should test that those connections have the expected
+outcomes. The underlying goal of integration tests is to wire together the most
+important components in isolation. Integration tests should be as fast as possible
+in order to enable them to be run repeatedly during testing. Integration tests
+that take longer than 0.5s are probably trying to test too much together and
+should be reorganized into separate tests. Integration tests should generally
+be written so that they are starting from a clean slate, but if that involves
+costly setup those components should be tested in isolation.
+
+We break integration tests into two categories, those that use Docker and those
+that do not. In general, high-level components that depend on the behavior of code
+running inside a Docker container should have at least one or two integration tests
+that test all the way down to Docker, but those should be part of their own
+test suite. Testing the API and high level API functions should generally
+not depend on calling into Docker. They are denoted by special test tags and
+should be in their own files so we can selectively build them.
+
+All integration tests are located under `test/integration/*`. All integration
+tests must set the `integration` build tag at the top of their source file,
+and also declare whether they need etcd with the `etcd` build tag and whether
+they need Docker with the `docker` build tag. For special function sets please
+create sub directories like `test/integration/deployimages`.

 Run the integration tests with:

 $ hack/test-integration.sh

-The script launches an instance of etcd and then invokes the integration tests. If you need to
-execute a subset of integration tests, run:
+The script launches an instance of etcd and then invokes the integration tests.
+If you need to execute a subset of integration tests, run:

 $ hack/test-integration.sh <regex>

-Where `<regex>` is some regular expression that matches the names of all of the tests you want to run.
-The regular expression is passed into `grep -E`, so ensure that the syntax or features you use are supported.
-The default regular expression used is `Test`, which matches all tests.
+Where `<regex>` is some regular expression that matches the names of all
+of the tests you want to run. The regular expression is passed into `grep -E`,
+so ensure that the syntax or features you use are supported. The default
+regular expression used
+is `Test`, which matches all tests.

-Each integration function is executed in its own process so that it cleanly shuts down any background
-goroutines. You will not be able to run more than a single test within a single process.
+Each integration function is executed in its own process so that it cleanly
+shuts down any background
+goroutines. You will not be able to run more than a single test within a single
+process.

-There is a CLI integration test suite which covers general non-Docker functionality of the CLI tool
+There is a CLI integration test suite which covers general non-Docker
+functionality of the CLI tool
 working against the API. Run it with:

 $ hack/test-cmd.sh

-This suite comprises many smaller suites, which are found under `test/cmd` and can be run individually by
-specifying a regex filter, passed through `grep -E` like with integration tests above:
+This suite comprises many smaller suites, which are found under `test/cmd` and
+can be run individually by specifying a regex filter, passed through `grep -E`
+like with integration tests above:

 $ hack/test-cmd.sh <regex>

 During development, you can run a file `test/cmd/*.sh` directly to test against
-a running server. This can speed up the feedback loop considerably. All `test/cmd/*` tests are expected
-to be executable repeatedly - please file bugs if a test needs cleanup before running.
+a running server. This can speed up the feedback loop considerably. All
+`test/cmd/*` tests are expected
+to be executable repeatedly - please file bugs if a test needs cleanup before
+running.

 For example, start the OpenShift server, create a "test" project, and then run
 `oc new-app` tests against the server:
...
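Since the `<regex>` filters above are handed straight to `grep -E`, ordinary extended-regex alternation is enough to select several tests at once. A self-contained sketch of just that selection step (the test names below are invented for illustration, not real Origin tests):

```shell
# Simulate how a filter narrows a list of test names via grep -E:
# only names matching the extended regex survive.
printf 'TestDeploymentConfig\nTestBuildPod\nTestUserLogin\n' \
  | grep -E 'TestDeploy|TestUser'
```

This keeps `TestDeploymentConfig` and `TestUserLogin` and drops `TestBuildPod`; the unanchored default pattern `Test` would match all three.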
@@ -180,20 +205,24 @@ For example, start the OpenShift server, create a "test" project, and then run

 $ oc new-project test
 $ test/cmd/newapp.sh

-In order to run the suite, generate a jUnit XML report, and see a summary of the test suite, use:
+In order to run the suite, generate a jUnit XML report, and see a summary of
+the test suite, use:

 $ JUNIT_REPORT='true' hack/test-cmd.sh

 ### End-to-End (e2e) and Extended Tests

-The final test category is end to end tests (e2e) which should verify a long set of flows in the
-product as a user would see them. Two e2e tests should not overlap more than 10% of function, and
-are not intended to test error conditions in detail. The project examples should be driven by e2e
-tests. e2e tests can also test external components working together.
+The final test category is end to end tests (e2e) which should verify a long
+set of flows in the product as a user would see them. Two e2e tests should not
+overlap more than 10% of function, and are not intended to test error conditions
+in detail. The project
+examples should be driven by e2e tests. e2e tests can also test external
+components working together.

-The end-to-end suite is currently implemented primarily in Bash, but will be folded into the extended
-suite (located in test/extended) over time. The extended suite is closer to the upstream Kubernetes
-e2e suite and tests the full behavior of a running system.
+The end-to-end suite is currently implemented primarily in Bash, but will be
+folded into the extended suite (located in test/extended) over time.
+The extended suite is closer to the upstream Kubernetes e2e suite and
+tests the full behavior of a running system.

 Run the end to end tests with:

...
@@ -203,25 +232,29 @@ Run the extended tests with:

 $ test/extended/core.sh

-This suite comprises many smaller suites, which are found under `test/extended` and can be run individually by
-specifying `--ginkgo.focus` and a regex filter:
+This suite comprises many smaller suites, which are found under `test/extended`
+and can be run individually by specifying `--ginkgo.focus` and a regex filter:

 $ test/extended/core.sh --ginkgo.focus=<regex>

-Extended tests should be Go tests in the `test/extended` directory that use the Ginkgo library. They
-must be able to be run remotely, and cannot depend on any local interaction with the filesystem or
-Docker.
+Extended tests should be Go tests in the `test/extended` directory that use
+the Ginkgo library. They must be able to be run remotely, and cannot depend on
+any local interaction with the filesystem or Docker.

 More information about running extended tests can be found in
 [test/extended/README](https://github.com/openshift/origin/blob/master/test/extended/README.md).


 ## Installing Godep

-OpenShift and Kubernetes use [Godep](https://github.com/tools/godep) for dependency management.
-Godep allows versions of dependent packages to be locked at a specific commit by *vendoring* them
-(checking a copy of them into `vendor/`). This means that everything you need for
-OpenShift is checked into this repository. To install `godep` locally run:
+OpenShift and Kubernetes use [Godep](https://github.com/tools/godep) for
+dependency management. Godep allows versions of dependent packages to be
+locked at a specific commit by *vendoring* them (checking a copy of them into
+`vendor/`).
+This means that everything you need for OpenShift is checked into this
+repository.
+
+To install `godep` locally run:

 $ go get github.com/tools/godep

...
@@ -229,53 +263,66 @@ If you are not updating packages you should not need godep installed.

 ## Cherry-picking an upstream commit into Origin: Why, how, and when.

-Origin carries patches inside of vendor/ on top of each rebase. Thus, origin carries upstream patches in two ways.
+Origin carries patches inside of vendor/ on top of each rebase.
+Thus, origin carries upstream patches in two ways.

 1. *periodic rebases* against a Kubernetes commit.
-Eventually, any code you have in upstream kubernetes will land in Openshift via this mechanism.
+Eventually, any code you have in upstream kubernetes will land in Openshift
+via this mechanism.

-2. Cherry-picked patches for important *bug fixes*. We really try to limit feature back-porting entirely.
+2. Cherry-picked patches for important *bug fixes*. We really try to
+limit feature back-porting entirely.

 ### Manually

-You can manually try to cherry pick a commit (by using git apply). This can easily be done in a couple of steps.
+You can manually try to cherry pick a commit (by using git apply). This can
+easily be done in a couple of steps.

-- wget the patch, i.e. `wget -O /tmp/mypatch https://github.com/kubernetes/kubernetes/pull/34624.patch`
+- wget the patch, i.e. `wget -O /tmp/mypatch
+https://github.com/kubernetes/kubernetes/pull/34624.patch`
 - PATCH=/tmp/mypatch git apply --directory vendor/k8s.io/kubernetes $PATCH

 If this fails, then it's possible you may need to pick multiple commits.

 ### For Openshift newcomers: Pick my kubernetes fix into Openshift vs. wait for the next rebase?

-Assuming you read the bullets above... If your patch is really far behind, for example, if there have been 5 commits
-modifying the directory you care about, cherry picking will be increasingly difficult and you should consider waiting
-for the next rebase, which will likely include the commit you care about, or at least decrease the amount of cherry picks
-you need to do to merge.
+Assuming you read the bullets above... If your patch is really far behind, for
+example, if there have been 5 commits modifying the directory you care about,
+cherry picking will be increasingly difficult and you should consider waiting
+for the next rebase, which will likely include the commit you care about, or at
+least decrease the amount of cherry picks you need to do to merge.

-To really know the answer, you need to know *how many commits behind you are in a particular directory*, often.
+To really know the answer, you need to know *how many commits behind you are in
+a particular directory*, often.

 To do this, just use git log, like so (using pkg/scheduler/ as an example).

 ```
-MYDIR=pkg/scheduler/algorithm git log --oneline -- vendor/k8s.io/kubernetes/${MYDIR} | grep UPSTREAM | cut -d' ' -f 4-10 | head -1
+MYDIR=pkg/scheduler/algorithm git log --oneline -- \
+vendor/k8s.io/kubernetes/${MYDIR} | grep UPSTREAM | cut -d' ' -f 4-10 | head -1
 ```

 The commit message printed above will tell you:
-- what the LAST commit in Kubernetes was (which effected "/pkg/scheduler/algorithm")
-- directory, which will give you an intuition about how "hot" the code you are cherry picking is.
-If it has changed a lot, recently, then
-that means you probably will want to wait for a rebase to land.
+
+- what the LAST commit in Kubernetes was (which affected
+"/pkg/scheduler/algorithm")
+- directory, which will give you an intuition about how "hot" the code you are
+cherry picking is. If it has changed a lot, recently, then that means you
+probably will want to wait for a rebase to land.

 ### Using hack/cherry-pick

-For convenience, you can use `hack/cherry-pick.sh` to generate patches for Origin from upstream commits.
+For convenience, you can use `hack/cherry-pick.sh` to generate patches for
+Origin from upstream commits.

-The purpose of this command is to allow you to pull individual commits from a local kubernetes repository
-into origin's vendored kuberenetes in a fully automated manner.
+The purpose of this command is to allow you to pull individual commits from a
+local kubernetes repository into origin's vendored kubernetes in a fully
+automated manner.

-To use this command, be sure to setup remote Pull Request branches in the kubernetes repository you are using
-(i.e. like https://gist.github.com/piscisaureus/3342247). Specifically, you will be doing this, to the git config
-you probably already have for kubernetes:
+To use this command, be sure to setup remote Pull Request branches in the
+kubernetes repository you are using (i.e. like https://gist.github.com/piscisaureus/3342247).
+Specifically, you will be doing this, to the git config you probably already
+have for kubernetes:

 ```
 [remote "origin"]
...
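The manual pick described above comes down to `git apply --directory`, which re-roots a patch written against the upstream tree under the vendored copy. A disposable demonstration of just that mechanic, in a scratch repository with a fabricated file and patch (not the real Kubernetes flow):

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q .
mkdir -p vendor/k8s.io/kubernetes
printf 'hello\n' > vendor/k8s.io/kubernetes/README
# A patch expressed against the upstream repository root,
# like the .patch file a PR download gives you:
cat > mypatch <<'EOF'
--- a/README
+++ b/README
@@ -1 +1 @@
-hello
+world
EOF
# --directory prepends the given path to every filename in the patch,
# so the hunk lands on the vendored copy instead of the repo root:
git apply --directory vendor/k8s.io/kubernetes mypatch
cat vendor/k8s.io/kubernetes/README
```

The final `cat` shows the patched content, confirming the hunk was applied to the vendored file.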
@@ -285,8 +333,11 @@ you probably already have for kubernetes:

 fetch = +refs/pull/*/head:refs/remotes/origin/pr/*
 ```

-so that `git show origin/pr/<number>` displays information about your branch after a `git fetch`.
-You must also have the Kubernetes repository checked out in your GOPATH (visible as `../../../k8s.io/kubernetes`),
+so that `git show origin/pr/<number>` displays information about your branch
+after a `git fetch`.
+
+You must also have the Kubernetes repository checked out in your GOPATH
+(visible as `../../../k8s.io/kubernetes`),
 with openshift/kubernetes as a remote and fetched:

 $ pushd $GOPATH/src/k8s.io/kubernetes
...
@@ -300,13 +351,14 @@ To pull an upstream commit, run:

 $ hack/cherry-pick.sh <pr_number>

-This will attempt to create a patch from the current Kube rebase version in Origin that contains
-the commits added in the PR. If the PR has already been merged to the Kube version, you'll get an
-error. If there are conflicts, you'll have to resolve them in the upstream repo, then hit ENTER
-to continue. The end result will be a single commit in your Origin repo that contains the changes.
+This will attempt to create a patch from the current Kube rebase version in
+Origin that contains the commits added in the PR. If the PR has already been
+merged to the Kube version, you'll get an error. If there are conflicts, you'll
+have to resolve them in the upstream repo, then hit ENTER to continue. The end
+result will be a single commit in your Origin repo that contains the changes.

-If you want to run without a rebase option, set `NO_REBASE=1` before the command is run. You can
-also specify a commit range directly with:
+If you want to run without a rebase option, set `NO_REBASE=1` before the
+command is run. You can also specify a commit range directly with:

 $ hack/cherry-pick.sh origin/master...<some_branch>

...
@@ -314,29 +366,35 @@ All upstream commits should have a commit message where the first line is:

 UPSTREAM: <PR number|drop|carry>: <short description>

-`drop` indicates the commit should be removed during the next rebase. `carry` means that the change
-cannot go into upstream, and we should continue to use it during the next rebase.
+`drop` indicates the commit should be removed during the next rebase. `carry`
+means that the change cannot go into upstream, and we should continue to use it
+during the next rebase.

-You can also target repositories other than Kube by setting `UPSTREAM_REPO` and `UPSTREAM_PACKAGE`
-env vars. `UPSTREAM_REPO` should be the full name of the Git repo as Go sees it, i.e.
-`github.com/coreos/etcd`, and `UPSTREAM_PACKAGE` must be a package inside that repo that is
-currently part of the Godeps.json file. Example:
+You can also target repositories other than Kube by setting `UPSTREAM_REPO` and
+`UPSTREAM_PACKAGE` env vars. `UPSTREAM_REPO` should be the full name of the Git
+repo as Go sees it, i.e. `github.com/coreos/etcd`, and `UPSTREAM_PACKAGE` must be
+a package inside that repo that is currently part of the Godeps.json file. Example:

- $ UPSTREAM_REPO=github.com/coreos/etcd UPSTREAM_PACKAGE=store hack/cherry-pick.sh <pr_number>
+ $ UPSTREAM_REPO=github.com/coreos/etcd UPSTREAM_PACKAGE=store \
+   hack/cherry-pick.sh <pr_number>

-By default `hack/cherry-pick.sh` uses git remote named `origin` to fetch kubernetes repository,
-if your git configuration is different, you can pass the git remote name by setting `UPSTREAM_REMOTE` env var:
+By default `hack/cherry-pick.sh` uses git remote named `origin` to fetch
+kubernetes repository, if your git configuration is different, you can pass the git
+remote name by setting `UPSTREAM_REMOTE` env var:

 $ UPSTREAM_REMOTE=upstream hack/cherry-pick.sh <pr_number>

 ## Moving a commit you developed in Origin to an upstream

-The `hack/move-upstream.sh` script takes the current feature branch, finds any changes to the
-requested upstream project (as defined by `UPSTREAM_REPO` and `UPSTREAM_PACKAGE`) that differ
-from `origin/master`, and then creates a new commit in that upstream project on a branch with
-the same name as your current branch.
+The `hack/move-upstream.sh` script takes the current feature branch, finds any
+changes to the
+requested upstream project (as defined by `UPSTREAM_REPO` and
+`UPSTREAM_PACKAGE`) that differ from `origin/master`, and then creates a new
+commit in that upstream project on a branch with the same name as your current
+branch.

-For example, to upstream a commit to OpenShift source-to-image while working from Origin:
+For example, to upstream a commit to OpenShift source-to-image while working
+from Origin:

 $ git checkout my_feature_branch_in_origin
 $ git log --oneline
...
@@ -345,10 +403,12 @@ For example, to upstream a commit to OpenShift source-to-image while working fro
|
|
345
|
345
|
86eefdd UPSTREAM: 14618: Refactor exec to allow reuse from server
|
|
346
|
346
|
|
|
347
|
347
|
# we want to move our STI changes to upstream
|
|
348
|
|
- $ UPSTREAM_REPO=github.com/openshift/source-to-image UPSTREAM_PACKAGE=pkg/api hack/move-upstream.sh
|
|
|
348
|
+ $ UPSTREAM_REPO=github.com/openshift/source-to-image
|
|
|
349
|
+UPSTREAM_PACKAGE=pkg/api hack/move-upstream.sh
|
|
349
|
350
|
...
|
|
350
|
351
|
|
|
351
|
|
- # All changes to source-to-image in Godeps/. are now in a commit UPSTREAMED in s2i repo
|
|
|
352
|
+ # All changes to source-to-image in Godeps/. are now in a commit UPSTREAMED
|
|
|
353
|
+in s2i repo
|
|
352
|
354
|
|
|
353
|
355
|
$ cd ../source-to-image
|
|
354
|
356
|
$ git log --oneline
|
|
...
|
...
|
@@ -356,19 +416,24 @@ For example, to upstream a commit to OpenShift source-to-image while working fro
|
|
356
|
356
|
... # older commits
|
|
357
|
357
|
|
|
358
|
358
|
The default is to work against Kube.
|
|
359
|
|
-
|
|
|
359
|
+go
|
|
360
|
360
|
|
|
361
|
361
|
## Updating Kubernetes from upstream

There are a few steps involved in rebasing Origin to a new version of
Kubernetes. We need to make sure not only that the Kubernetes packages were
updated correctly into `Godeps`, but also that *all tests are still running
without errors* and *code changes, refactorings or the inclusion/removal of
attributes were properly reflected* in the Origin codebase.

### 1. Preparation

Before you begin, make sure you have both
[openshift/origin](https://github.com/openshift/origin) and
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes) in your
$GOPATH. You may want to work on a separate $GOPATH just for the rebase:

```
...
$ git remote add openshift git@github.com:openshift/kubernetes.git
$ git fetch openshift
```

Check out the version of Kubernetes you want to rebase as a branch or tag
named `stable_proposed` in
[kubernetes/kubernetes](https://github.com/kubernetes/kubernetes). For example,
if you are going to rebase the latest `master` of Kubernetes:

...

#### 2.1. First option (preferred): using the rebase-kube.sh script

If all the requirements described in *Preparation* have been met, you should
not have any trouble rebasing the Kubernetes code using the script that
automates this process.

```
$ cd $GOPATH/src/github.com/openshift/origin
$ hack/rebase-kube.sh
```

Read over the changes with `git status` and make sure they look reasonable.
Check especially the `Godeps/Godeps.json` file to make sure no dependency is
unintentionally missing.

Commit using the message `bump(k8s.io/kubernetes):<commit SHA>`, where
`<commit SHA>` is the commit id for the Kubernetes version we are including in
our Godeps. It can be found in our `Godeps/Godeps.json` in the declaration of
any Kubernetes package.

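The revision for the bump commit message can be read straight out of
`Godeps/Godeps.json`. A minimal Python sketch (the `Deps`/`ImportPath`/`Rev`
layout follows the godep file format; the helper itself is illustrative and
not part of the `hack/` scripts):

```python
import json

def kube_rev(godeps_path="Godeps/Godeps.json"):
    """Return the pinned revision of the first k8s.io/kubernetes package."""
    with open(godeps_path) as f:
        deps = json.load(f)["Deps"]
    for dep in deps:
        # every vendored Kubernetes package carries the same Rev
        if dep["ImportPath"].startswith("k8s.io/kubernetes"):
            return dep["Rev"]
    raise LookupError("no k8s.io/kubernetes dependency found")
```

The returned revision is what goes after the colon in
`bump(k8s.io/kubernetes):<commit SHA>`.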
#### 2.2. Second option: manually

If for any reason you had trouble rebasing using the script, you may need to
do it manually. After following all the requirements described in the
*Preparation* topic, you will need to run `godep restore` from both the Origin
and the Kubernetes directories and then `godep save ./...` from the Origin
directory. Follow these steps:

1. `$ cd $GOPATH/src/github.com/openshift/origin`
2. `make clean ; godep restore` will restore the package versions specified in
the `Godeps/Godeps.json` of Origin to your GOPATH.
3. `$ cd $GOPATH/src/k8s.io/kubernetes`
4. `$ git checkout stable_proposed` will check out the desired version of
Kubernetes as branched in *Preparation*.
5. `$ godep restore` will restore the package versions specified in the
`Godeps/Godeps.json` of Kubernetes to your GOPATH.
6. `$ cd $GOPATH/src/github.com/openshift/origin`.
7. `$ make clean ; godep save ./...` will save a list of the checked-out
dependencies to the file `Godeps/Godeps.json`, and copy their source code into
`vendor`.
8. If in the previous step godep complains about the checked-out revision of a
package being different from the wanted revision, this probably means there
are new packages in Kubernetes that we need to add. Do a `godep save <pkgname>`
with the package specified by the error message and then `$ godep save ./...`
again.
9. Read over the changes with `git status` and make sure they look reasonable.
Check especially the `Godeps/Godeps.json` file to make sure no dependency is
unintentionally missing. The whole Godeps directory will be added to version
control, including `_workspace`.
10. Commit using the message `bump(k8s.io/kubernetes):<commit SHA>`, where
`<commit SHA>` is the commit id for the Kubernetes version we are including in
our Godeps. It can be found in our `Godeps/Godeps.json` in the declaration of
any Kubernetes package.

If in the process of rebasing manually you find any corner case not handled by
the `hack/rebase-kube.sh` script, make sure you update it accordingly to help
future rebases.

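One way to double-check that no dependency went unintentionally missing is to
compare the old and new `Godeps/Godeps.json` by import path. An illustrative
sketch operating on the parsed JSON (not part of the `hack/` scripts):

```python
def missing_deps(old_godeps, new_godeps):
    """Return import paths present in the old Godeps.json but absent from
    the new one (possible unintentional removals after `godep save`)."""
    old = {dep["ImportPath"] for dep in old_godeps["Deps"]}
    new = {dep["ImportPath"] for dep in new_godeps["Deps"]}
    return sorted(old - new)
```

An empty result means nothing was dropped; any path listed deserves a second
look before committing the bump.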
### 3. cherry-pick upstream changes pushed to the Origin repo

Eventually during the development cycle we introduce changes to dependencies
right in the Origin repository. This is not a generally recommended practice,
but it's useful if we need something that, for example, is in the Kubernetes
repository but we are not doing a rebase yet. So, when doing the next rebase,
we need to make sure we get all these changes, otherwise they will be
overridden by `godep save`.

1. Check the `Godeps` directory [commits history](https://github.com/openshift/origin/commits/master/Godeps)
for commits tagged with the *UPSTREAM* keyword. We will need to cherry-pick
*all UPSTREAM commits since the last Kubernetes rebase* (remember you can find
the last rebase commit by looking for a message like
`bump(k8s.io/kubernetes):...`).
2. For every commit tagged UPSTREAM, do `git cherry-pick <commit SHA>`.
3. Notice that eventually a cherry-pick will be empty. This probably means the
given change was already merged in Kubernetes and we don't need to
specifically add it to our Godeps. Nice!
4. Read over the commit history and make sure you have every UPSTREAM commit
since the last rebase (except only for the empty ones).

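Finding the commits to pick can be scripted against `git log --oneline`
output. A hedged Python sketch (the `UPSTREAM` and `bump(...)` subject
prefixes are just the conventions described above; the helper is
illustrative):

```python
def upstream_since_last_bump(oneline_log):
    """Given `git log --oneline` lines (newest first), return the SHAs of
    UPSTREAM-tagged commits made after the last Kubernetes bump."""
    picks = []
    for line in oneline_log:
        sha, _, subject = line.partition(" ")
        if subject.startswith("bump(k8s.io/kubernetes):"):
            break  # everything older is already covered by the last rebase
        if subject.startswith("UPSTREAM"):
            picks.append(sha)
    return picks
```

Since `git log` lists newest first, run `git cherry-pick` on the result in
reverse order (oldest first).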
### 4. Refactor Origin to be compliant with upstream changes

After making sure we have all the dependencies in place and up-to-date, we
need to work in the Origin codebase to make sure the compilation is not
broken, all tests pass, and it's compliant with any refactorings,
architectural changes, or behavior changes introduced in Kubernetes. Make
sure:

1. `make clean ; hack/build-go.sh` compiles without errors and the standalone
server starts correctly.
2. All of our generated code is up to date (run all the `hack/update-*`
scripts).
3. `hack/verify-open-ports.sh` runs without errors.
4. `hack/copy-kube-artifacts.sh` has been run so Kubernetes tests can be fully
functional. The diff resulting from this script should be squashed into the
Kube bump commit.
5. `TEST_KUBE=1 hack/test-go.sh` runs without errors.
6. `hack/test-cmd.sh` runs without errors.
7. `hack/test-integration.sh` runs without errors.
8. `hack/test-end-to-end.sh` runs without errors.
   See *Building a Release* above for setting up the environment for the
   *test-end-to-end.sh* tests.

It is helpful to look at the Kubernetes commit history to be aware of the
major topics. Although it can potentially break or change any part of Origin,
the most affected parts are usually:

1. https://github.com/openshift/origin/blob/master/pkg/cmd/server/start
2. https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go
3. https://github.com/openshift/origin/blob/master/pkg/cmd/server/origin/master.go
4. https://github.com/openshift/origin/blob/master/pkg/cmd/util/clientcmd/factory.go
5. https://github.com/openshift/origin/blob/master/pkg/cmd/cli/cli.go
6. https://github.com/openshift/origin/blob/master/pkg/api/meta/meta.go

Place all your changes in a commit called "Refactor to match changes upstream".

A typical pull request for your Kubernetes rebase will contain:

1. One commit for the Kubernetes Godeps bump
(`bump(k8s.io/kubernetes):<commit SHA>`).
2. Zero, one, or more bump commits for any **shared** dependencies between
Origin and Kubernetes that have been bumped. Any transitive dependencies
coming from Kubernetes should be squashed into the Kube bump commit.
3. Zero, one, or more cherry-picked commits tagged UPSTREAM.
4. One commit "Boring refactor to match changes upstream" that includes boring
changes like import rewriting, etc.
5. One commit "Interesting refactor to match changes upstream" that includes
interesting changes like new plugins or controller changes.

## Updating other Godeps from upstream

To update to a new version of a dependency that's not already included in
Kubernetes, check out the correct version in your GOPATH and then run
`godep save <pkgname>`. This should create a new version of
`Godeps/Godeps.json`, and update `vendor`. Create a commit that includes both
of these changes with the message `bump(<pkgname>): <pkgcommit>`.

## Updating external examples

`hack/update-external-example.sh` will pull down example files from external
repositories and deposit them under the `examples` directory. Run this script
if you need to refresh an example file, or add a new one. See the script and
`examples/quickstarts/README.md` for more details.

## Troubleshooting

If you run into difficulties running OpenShift, start by reading through the
[troubleshooting guide](https://github.com/openshift/origin/blob/master/docs/debugging-openshift.md).

## RPM Packaging

...
and tag releases.

## GSSAPI-enabled builds

When built with GSSAPI support, the `oc` client supports logging in with
Kerberos credentials on Linux and OS X. GSSAPI-enabled builds of `oc` cannot
be cross-compiled, but must be built on the target platform with the GSSAPI
header files available.

On Linux, ensure the `krb5-devel` package is installed:

...

Once dependencies are in place, build with the `gssapi` tag:

    $ hack/build-go.sh cmd/oc -tags=gssapi

Verify that the GSSAPI feature is enabled with `oc version`:

    $ oc version
    ...

## Swagger API Documentation

OpenShift and Kubernetes integrate with the [Swagger 2.0 API
framework](http://swagger.io), which aims to make it easier to document and
write clients for RESTful APIs. When you start OpenShift, the Swagger API
endpoint is exposed at `https://localhost:8443/swaggerapi`. The Swagger UI
makes it easy to view your documentation - to view the docs for your local
version of OpenShift, start the server with CORS enabled:

    $ openshift start --cors-allowed-origins=.*

and then browse to http://openshift3swagger-claytondev.rhcloud.com (which runs
a copy of the Swagger UI that points to localhost:8080 by default). Expand the
operations available on v1 to see the schemas (and to try the API directly).
Additionally, you can download swagger-ui from http://swagger.io/swagger-ui/
and use it to point to your local swagger API endpoint.

Note: Hosted API documentation can be found
[here](http://docs.openshift.org/latest/rest_api/openshift_v1.html).

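The swagger endpoint serves plain JSON, so it can also be explored
programmatically. A small illustrative sketch; the `{"apis": [{"path": ...}]}`
shape and the sample data below are assumptions for illustration, not the
server's actual response:

```python
import json

def list_api_paths(swagger_doc):
    """Return the `path` of each entry in a swagger resource listing.

    Assumes the shape {"apis": [{"path": "..."}]}; adjust to match the
    document actually served at /swaggerapi.
    """
    return [api["path"] for api in swagger_doc.get("apis", [])]

# Made-up sample, not a real server response:
sample = json.loads('{"apis": [{"path": "/oapi"}, {"path": "/api"}]}')
```

Feeding `sample` to `list_api_paths` yields the top-level API paths to drill
into.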
## Performance debugging

OpenShift integrates the Go `pprof` tooling to make it easy to capture CPU and
heap dumps for running systems. The following modes are available for the
`openshift` binary (including all the CLI variants):

* `OPENSHIFT_PROFILE` environment variable:
  * `cpu` - will start a CPU profile on startup and write `./cpu.pprof`.
    Contains samples for the entire run at the native sampling resolution
    (100hz). Note: CPU profiling for Go does not currently work on Mac OS X -
    the stats are not correctly sampled.
  * `mem` - generate a running heap dump that tracks allocations to
    `./mem.pprof`
  * `block` - will start a block wait time analysis and write `./block.pprof`
  * `web` - start the pprof webserver in process at
    http://127.0.0.1:6060/debug/pprof (you can open this in a browser). This
    supports `OPENSHIFT_PROFILE_HOST=` and `OPENSHIFT_PROFILE_PORT=` to change
    the default IP `127.0.0.1` and default port `6060`.

In order to start the server in CPU profiling mode, run:

    $ OPENSHIFT_PROFILE=cpu sudo ./_output/local/bin/linux/amd64/openshift start

Or, if running OpenShift under systemd, append this to
`/etc/sysconfig/atomic-openshift-{master,node}`:

    OPENSHIFT_PROFILE=cpu

To view profiles, you use
[pprof](http://goog-perftools.sourceforge.net/doc/cpu_profiler.html), which is
part of `go tool`. You must pass the binary you are debugging (for symbols)
and a captured pprof file. For instance, to view a `cpu` profile from above,
you would run OpenShift to completion, and then run:

    $ go tool pprof ./_output/local/bin/linux/amd64/openshift cpu.pprof

or

...
to launch a web browser window showing you where CPU time is going.

`pprof` supports CLI arguments for looking at profiles in different ways -
memory profiles by default show allocated space:

    $ go tool pprof ./_output/local/bin/linux/amd64/openshift mem.pprof

but you can also see the allocated object counts:

    $ go tool pprof --alloc_objects ./_output/local/bin/linux/amd64/openshift mem.pprof

Finally, when using the `web` profile mode, you can have the go tool directly
fetch your profiles via HTTP:

    # for a 30s CPU trace
    $ go tool pprof ./_output/local/bin/linux/amd64/openshift http://127.0.0.1:6060/debug/pprof/profile

    # for a snapshot heap dump at the current time, showing total allocations
    $ go tool pprof --alloc_space ./_output/local/bin/linux/amd64/openshift http://127.0.0.1:6060/debug/pprof/heap

See [debugging Go programs](https://golang.org/pkg/net/http/pprof/) for more
info. `pprof` has many modes and is very powerful (try `tree`) - you can pass
a regex to many arguments to limit your results to only those samples that
match the regex (basically the function name or the call stack).
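The profile-fetch URLs above follow one fixed pattern, so composing them can
be scripted. An illustrative helper (the defaults mirror the
`OPENSHIFT_PROFILE_HOST`/`OPENSHIFT_PROFILE_PORT` defaults; `seconds` maps to
pprof's standard `?seconds=` query parameter for timed CPU traces):

```python
def pprof_url(kind="profile", host="127.0.0.1", port=6060, seconds=None):
    """Build a /debug/pprof URL that `go tool pprof` can fetch directly."""
    url = "http://%s:%d/debug/pprof/%s" % (host, port, kind)
    if seconds is not None:
        url += "?seconds=%d" % seconds  # duration of a CPU trace
    return url
```

For example, `pprof_url("heap")` gives the heap-dump endpoint, and
`pprof_url(seconds=30)` the 30s CPU trace shown above.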