
Update descriptions on all resources and remove definitions

With OpenAPI, descriptions are moving to the types.

Clayton Coleman authored on 2016/09/19 06:41:29
Showing 46 changed files
1 1
deleted file mode 100644
... ...
@@ -1,3 +0,0 @@
1
-Build configurations define a build process for new Docker images. There are three types of builds possible - a Docker build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary Docker images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the Docker registry specified in the "output" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created.
2
-
3
-Each build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have "output" set can be used to test code or run a verification build.
4 1
\ No newline at end of file
5 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-CinderVolumeSource represents a Cinder volume resource in OpenStack. A Cinder volume must exist before mounting to a container. The volume must also be in the same region as the kubelet.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1,3 +0,0 @@
1
-Deployment Configs define the template for a pod and manage deploying new images or configuration changes. A single deployment configuration is usually analogous to a single micro-service. It can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller.
2
-
3
-A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed. Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment is carried out and may be changed at any time.
4 1
\ No newline at end of file
5 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-A deployment log is a virtual resource used by the OpenShift client tool for retrieving the logs for a deployment.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-DownwardAPIVolumeFile represents a single file containing information from the downward API
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-DownwardAPIVolumeSource represents a volume containing downward API info
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-FSGroupStrategyOptions is the strategy that will dictate what fs group is used by the SecurityContext.
2 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-IDRange provides a min/max of an allowed range of IDs.
2 1
deleted file mode 100644
... ...
@@ -1,8 +0,0 @@
1
-ImageSource is used to describe build source that will be extracted from an image. A reference of
2
-type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified
3
-to pull the image from an external registry or override the default service account secret if pulling
4
-from the internal registry. A list of paths to copy from the image and their respective destination
5
-within the build directory must be specified in the paths array.
6
-
7
-EXPERIMENTAL.  This will be changing to an array of images in the near future and no migration/compatibility 
8
-will be provided.  Use at your own risk.
9 1
\ No newline at end of file
10 2
deleted file mode 100644
... ...
@@ -1,3 +0,0 @@
1
-ImageSourcePath specifies the absolute path of a file or directory within a source image
2
-to be copied to a relative directory of the build home. If a source directory is specified, all 
3
-files and directories under that directory are copied from the image.
4 1
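To make the relationship between ImageSource and ImageSourcePath concrete, here is a minimal Go sketch built only from the ImageSource fields visible in this commit (From, Paths, PullSecret). The import paths and the ImageSourcePath field names (SourcePath, DestinationDir) are assumptions for illustration, not taken from this diff.

package main

import (
	buildv1 "github.com/openshift/origin/pkg/build/api/v1" // assumed import path
	kapiv1 "k8s.io/kubernetes/pkg/api/v1"                  // assumed import path
)

func main() {
	// Illustrative only: build source extracted from an image.
	src := buildv1.ImageSource{
		// from references an ImageStreamTag, ImageStreamImage, or DockerImage.
		From: kapiv1.ObjectReference{Kind: "ImageStreamTag", Name: "input:latest"},
		// paths lists files or directories to copy from the image and their
		// destinations relative to the build directory.
		Paths: []buildv1.ImageSourcePath{
			{SourcePath: "/usr/lib/app", DestinationDir: "app/"}, // field names assumed
		},
		// pullSecret is only needed for an external registry, or to override
		// the default service account secret for the internal registry.
		PullSecret: &kapiv1.LocalObjectReference{Name: "registry-pull-secret"},
	}
	_ = src
}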
deleted file mode 100644
... ...
@@ -1,4 +0,0 @@
1
-The image stream import resource provides an easy way for a user to find and import Docker images from other Docker registries into the server. Individual images or an entire image repository may be imported, and users may choose to see the results of the import prior to tagging the resulting images into the specified image stream.
2
-
3
-This API is intended for end-user tools that need to see the metadata of the image prior to import (for instance, to generate an application from it). Clients that know the desired image can continue to create spec.tags directly into their image streams.
4
-
5 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-Local Resource Access Reviews are objects that allow you to determine which users and groups can perform a given action in a given namespace.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-Local Subject Access Reviews are objects that allow you to determine whether a given user or group can perform a particular action in a given namespace. Leaving `user` and `groups` empty allows you to determine whether the identity making the request can perform the action.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-NetNamespace represents a segregated network namespace for an entire cluster. When a group of pods, a project, or a group of projects is assigned a NetNamespace, the openshift-sdn multitenant plugin ensures network layer isolation of traffic from other NetNamespaces.
2 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-NetNamespaceList represents a list of NetNamespace objects. NetNamespace captures information about a segregated network namespace for an entire cluster. When a group of pods, a project, or a group of projects is assigned a NetNamespace, the openshift-sdn multitenant plugin ensures network layer isolation of traffic from other NetNamespaces.
2 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-Nodes represent the machines that run the pods and containers in the cluster. A node resource is typically created and modified by the software running on the node - reporting information about capacity and the current health of the node. The labels of the node can be used by pods to specify a subset of the cluster to be scheduled on. The scheduler will only assign pods to nodes that have the `schedulable` condition set and also `ready`.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1,3 +0,0 @@
1
-A Persistent Volume (PV) is a storage device that is made available by an administrator for use by applications. When a user requests persistent storage be allocated for a pod, they create a persistent volume claim with the size and type of storage they need. The system will look for persistent volumes that match that claim and, if one is available, it will assign that persistent volume to the claim. Information about the volume (type, location, secrets necessary to use it) will be available to the claim and the claim may then be used from a pod as a volume source.
2
-
3
-Deleting a persistent volume removes the cluster's record of the volume, and may result in automated processes destroying the underlying network store.
4 1
\ No newline at end of file
5 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-Persistent Volume Claims (PVC) represent a request to use a persistent volume (PV) with a pod. When creating a pod definition (or replication controller or deployment config) a developer may specify the amount of storage they need via a persistent volume reference. If an administrator has enabled and configured persistent volumes for use, they will be allocated on demand to pods that have similar requirements. Since volumes are created lazily, some pods may be scheduled to a node before their volume is assigned. The node will detect this situation and wait to start the pod until the volume is bound. Events will be generated (visible by using the `describe` command on the pod) that indicate the pod is waiting for volumes.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1,3 +0,0 @@
1
-A pod corresponds to a group of containers running together on the same machine. All containers in a pod share an IP address, and may have access to shared volumes and the local filesystem. Like individual application containers, pods are considered to be relatively ephemeral rather than durable entities. Pods are scheduled to nodes and remain there until termination (according to restart policy) or deletion. When a node dies, the pods scheduled to that node are deleted. Specific pods are never rescheduled to new nodes; instead, they must be replaced by a component like the replication controller.
2
-
3
-See link:https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/pods.md[the Kubernetes pod documentation] for more information.
4 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members, a quota on the resources that the project may consume, and the security controls on the resources in the project. Within a project, members may have different roles - project administrators can set membership, editors can create and manage the resources, and viewers can see but not access running containers. In a normal cluster project administrators are not able to alter their quotas - that is restricted to cluster administrators.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-SecretBuildSource describes a secret and its destination directory that will be used only at build time. The content of the secret referenced here will be copied into the destination directory instead of being mounted.
2 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-A SecretSpec specifies a secret and its corresponding mount point for a custom builder. The specified secret must be assigned to the service account that will run the build.
2 1
deleted file mode 100644
... ...
@@ -1,7 +0,0 @@
1
-A Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label selector. Services broadly fall into two types - those that load balance a set of Pods and hide which Pod a client talks to (clusterIP set to an IP address), and those where clients want to talk to the individual member pods directly (clusterIP set to 'None', also known as 'headless' services). The cluster IP of a service is exposed as an environment variable in each pod in the same namespace.
2
-
3
-Services may be exposed only inside the cluster (type ClusterIP), inside the cluster and on a high port on each node (type NodePort), or exposed to a load balancer via the hosting cloud infrastructure (type LoadBalancer). Services with a ClusterIP may choose to map the ports available on the ClusterIP to different ports on the pods. Each service has a DNS entry of the form `<name>.<namespace>.svc.cluster.local` that will be valid from other pods in the cluster.
4
-
5
-If the selector for pods is not specified, the service endpoints may be managed by the client directly. Update the endpoint resource to program the service - this can be used to inject external network services into a namespace.
6
-
7
-See link:https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md[the Kubernetes service documentation] for more information.
8 1
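As a quick illustration of the naming conventions mentioned above, the sketch below builds the `<name>.<namespace>.svc.cluster.local` DNS form for a made-up service; the `_SERVICE_HOST` environment-variable convention shown alongside it is an assumption about typical Kubernetes behavior, not something stated in this diff.

package main

import (
	"fmt"
	"strings"
)

// clusterDNSName follows the <name>.<namespace>.svc.cluster.local form described above.
func clusterDNSName(name, namespace string) string {
	return fmt.Sprintf("%s.%s.svc.cluster.local", name, namespace)
}

// serviceHostEnvVar shows the conventional variable that carries a service's cluster IP
// in pods of the same namespace (an assumption here, e.g. FRONTEND_SERVICE_HOST).
func serviceHostEnvVar(name string) string {
	return strings.ToUpper(strings.Replace(name, "-", "_", -1)) + "_SERVICE_HOST"
}

func main() {
	fmt.Println(clusterDNSName("frontend", "demo")) // frontend.demo.svc.cluster.local
	fmt.Println(serviceHostEnvVar("frontend"))      // FRONTEND_SERVICE_HOST
}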
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-SupplementalGroupsStrategyOptions is the strategy that will dictate what supplemental groups are used by the SecurityContext.
2 1
deleted file mode 100644
... ...
@@ -1 +0,0 @@
1
-Upon login, every user of the system receives a User and Identity resource. Administrators may directly manipulate the attributes of the users for their own tracking, or set groups via the API. The user name is unique and is chosen based on the value provided by the identity provider - if a user already exists with the incoming name, the user name may have a number appended to it.
2 1
\ No newline at end of file
3 2
deleted file mode 100644
... ...
@@ -1,7 +0,0 @@
1
-Scale is a subresource representing the current "scale" of certain objects, such as
2
-ReplicationControllers and DeploymentConfigs.  It may be checked to determine the current
3
-replica count of these objects, or updated to set the replica count of these objects.
4
-
5
-In the case of ReplicationControllers, this directly reflects the scale of the ReplicationController.
6
-For DeploymentConfigs, it reflects the scale of the deployment(s), or, if none are present, the
7
-scale of the DeploymentConfig's template.
8 1
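A minimal sketch of the idea behind the scale subresource, using a made-up struct rather than the real Kubernetes/OpenShift types: the desired replica count is written, and the controller converges the observed count toward it.

package main

import "fmt"

// scaleSketch is a made-up, simplified stand-in for a scale subresource:
// a desired replica count (spec) and an observed one (status).
type scaleSketch struct {
	SpecReplicas   int32 // desired replica count; update this to scale
	StatusReplicas int32 // current replica count reported by the controller
}

func main() {
	s := scaleSketch{SpecReplicas: 2, StatusReplicas: 2}
	// Scaling up is just raising the desired count; the ReplicationController
	// (or the DeploymentConfig's latest deployment) drives StatusReplicas toward it.
	s.SpecReplicas = 5
	fmt.Printf("desired=%d observed=%d\n", s.SpecReplicas, s.StatusReplicas)
}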
deleted file mode 120000
... ...
@@ -1 +0,0 @@
1
-../../definitions/v1.persistentvolumeclaim/description.adoc
2 1
\ No newline at end of file
... ...
@@ -65,7 +65,9 @@ message Build {
65 65
   optional BuildStatus status = 3;
66 66
 }
67 67
 
68
-// BuildConfig is a template which can be used to create new builds.
68
+// Build configurations define a build process for new Docker images. There are three types of builds possible - a Docker build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary Docker images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the Docker registry specified in the "output" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created.
69
+// 
70
+// Each build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have "output" set can be used to test code or run a verification build.
69 71
 message BuildConfig {
70 72
   // metadata for BuildConfig.
71 73
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
... ...
@@ -642,7 +644,11 @@ message ImageChangeTrigger {
642 642
   optional k8s.io.kubernetes.pkg.api.v1.ObjectReference from = 2;
643 643
 }
644 644
 
645
-// ImageSource describes an image that is used as source for the build
645
+// ImageSource is used to describe build source that will be extracted from an image. A reference of
646
+// type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified
647
+// to pull the image from an external registry or override the default service account secret if pulling
648
+// from the internal registry. A list of paths to copy from the image and their respective destination
649
+// within the build directory must be specified in the paths array.
646 650
 message ImageSource {
647 651
   // from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to
648 652
   // copy source from.
... ...
@@ -42,7 +42,7 @@ func (Build) SwaggerDoc() map[string]string {
42 42
 }
43 43
 
44 44
 var map_BuildConfig = map[string]string{
45
-	"":         "BuildConfig is a template which can be used to create new builds.",
45
+	"":         "Build configurations define a build process for new Docker images. There are three types of builds possible - a Docker build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary Docker images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the Docker registry specified in the \"output\" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created.\n\nEach build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have \"output\" set can be used to test code or run a verification build.",
46 46
 	"metadata": "metadata for BuildConfig.",
47 47
 	"spec":     "spec holds all the input necessary to produce a new build, and the conditions when to trigger them.",
48 48
 	"status":   "status holds any relevant information about a build config",
... ...
@@ -362,7 +362,7 @@ func (ImageChangeTrigger) SwaggerDoc() map[string]string {
362 362
 }
363 363
 
364 364
 var map_ImageSource = map[string]string{
365
-	"":           "ImageSource describes an image that is used as source for the build",
365
+	"":           "ImageSource is used to describe build source that will be extracted from an image. A reference of type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified to pull the image from an external registry or override the default service account secret if pulling from the internal registry. A list of paths to copy from the image and their respective destination within the build directory must be specified in the paths array.",
366 366
 	"from":       "from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to copy source from.",
367 367
 	"paths":      "paths is a list of source and destination paths to copy from the image.",
368 368
 	"pullSecret": "pullSecret is a reference to a secret to be used to pull the image from a registry If the image is pulled from the OpenShift registry, this field does not need to be set.",
... ...
@@ -255,7 +255,11 @@ type BuildSource struct {
255 255
 	Secrets []SecretBuildSource `json:"secrets,omitempty" protobuf:"bytes,8,rep,name=secrets"`
256 256
 }
257 257
 
258
-// ImageSource describes an image that is used as source for the build
258
+// ImageSource is used to describe build source that will be extracted from an image. A reference of
259
+// type ImageStreamTag, ImageStreamImage or DockerImage may be used. A pull secret can be specified
260
+// to pull the image from an external registry or override the default service account secret if pulling
261
+// from the internal registry. A list of paths to copy from the image and their respective destination
262
+// within the build directory must be specified in the paths array.
259 263
 type ImageSource struct {
260 264
 	// from is a reference to an ImageStreamTag, ImageStreamImage, or DockerImage to
261 265
 	// copy source from.
... ...
@@ -611,7 +615,9 @@ type BuildOutput struct {
611 611
 	PushSecret *kapi.LocalObjectReference `json:"pushSecret,omitempty" protobuf:"bytes,2,opt,name=pushSecret"`
612 612
 }
613 613
 
614
-// BuildConfig is a template which can be used to create new builds.
614
+// Build configurations define a build process for new Docker images. There are three types of builds possible - a Docker build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary Docker images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the Docker registry specified in the "output" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created.
615
+//
616
+// Each build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have "output" set can be used to test code or run a verification build.
615 617
 type BuildConfig struct {
616 618
 	unversioned.TypeMeta `json:",inline"`
617 619
 	// metadata for BuildConfig.
... ...
@@ -43,10 +43,15 @@ message DeploymentCauseImageTrigger {
43 43
   optional k8s.io.kubernetes.pkg.api.v1.ObjectReference from = 1;
44 44
 }
45 45
 
46
-// DeploymentConfig represents a configuration for a single deployment (represented as a
47
-// ReplicationController). It also contains details about changes which resulted in the current
48
-// state of the DeploymentConfig. Each change to the DeploymentConfig which should result in
49
-// a new deployment results in an increment of LatestVersion.
46
+// Deployment Configs define the template for a pod and manage deploying new images or configuration changes.
47
+// A single deployment configuration is usually analogous to a single micro-service. It can support many different
48
+// deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as
49
+// well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller.
50
+// 
51
+// A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed.
52
+// Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment
53
+// is carried out and may be changed at any time. The `latestVersion` field is updated when a new deployment
54
+// is triggered by any means.
50 55
 message DeploymentConfig {
51 56
   // Standard object's metadata.
52 57
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
... ...
@@ -36,7 +36,7 @@ func (DeploymentCauseImageTrigger) SwaggerDoc() map[string]string {
36 36
 }
37 37
 
38 38
 var map_DeploymentConfig = map[string]string{
39
-	"":         "DeploymentConfig represents a configuration for a single deployment (represented as a ReplicationController). It also contains details about changes which resulted in the current state of the DeploymentConfig. Each change to the DeploymentConfig which should result in a new deployment results in an increment of LatestVersion.",
39
+	"":         "Deployment Configs define the template for a pod and manage deploying new images or configuration changes. A single deployment configuration is usually analogous to a single micro-service. It can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller.\n\nA deployment is \"triggered\" when its configuration is changed or a tag in an Image Stream is changed. Triggers can be disabled to allow manual control over a deployment. The \"strategy\" determines how the deployment is carried out and may be changed at any time. The `latestVersion` field is updated when a new deployment is triggered by any means.",
40 40
 	"metadata": "Standard object's metadata.",
41 41
 	"spec":     "Spec represents a desired deployment state and how to deploy to it.",
42 42
 	"status":   "Status represents the current deployment state.",
... ...
@@ -238,10 +238,15 @@ const (
238 238
 
239 239
 // +genclient=true
240 240
 
241
-// DeploymentConfig represents a configuration for a single deployment (represented as a
242
-// ReplicationController). It also contains details about changes which resulted in the current
243
-// state of the DeploymentConfig. Each change to the DeploymentConfig which should result in
244
-// a new deployment results in an increment of LatestVersion.
241
+// Deployment Configs define the template for a pod and manage deploying new images or configuration changes.
242
+// A single deployment configuration is usually analogous to a single micro-service. It can support many different
243
+// deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as
244
+// well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller.
245
+//
246
+// A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed.
247
+// Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment
248
+// is carried out and may be changed at any time. The `latestVersion` field is updated when a new deployment
249
+// is triggered by any means.
245 250
 type DeploymentConfig struct {
246 251
 	unversioned.TypeMeta `json:",inline"`
247 252
 	// Standard object's metadata.
... ...
@@ -172,7 +172,14 @@ message ImageStreamImage {
172 172
   optional Image image = 2;
173 173
 }
174 174
 
175
-// ImageStreamImport imports an image from remote repositories into OpenShift.
175
+// The image stream import resource provides an easy way for a user to find and import Docker images
176
+// from other Docker registries into the server. Individual images or an entire image repository may
177
+// be imported, and users may choose to see the results of the import prior to tagging the resulting
178
+// images into the specified image stream.
179
+// 
180
+// This API is intended for end-user tools that need to see the metadata of the image prior to import
181
+// (for instance, to generate an application from it). Clients that know the desired image can continue
182
+// to create spec.tags directly into their image streams.
176 183
 message ImageStreamImport {
177 184
   // Standard object's metadata.
178 185
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
... ...
@@ -119,7 +119,7 @@ func (ImageStreamImage) SwaggerDoc() map[string]string {
119 119
 }
120 120
 
121 121
 var map_ImageStreamImport = map[string]string{
122
-	"":         "ImageStreamImport imports an image from remote repositories into OpenShift.",
122
+	"":         "The image stream import resource provides an easy way for a user to find and import Docker images from other Docker registries into the server. Individual images or an entire image repository may be imported, and users may choose to see the results of the import prior to tagging the resulting images into the specified image stream.\n\nThis API is intended for end-user tools that need to see the metadata of the image prior to import (for instance, to generate an application from it). Clients that know the desired image can continue to create spec.tags directly into their image streams.",
123 123
 	"metadata": "Standard object's metadata.",
124 124
 	"spec":     "Spec is a description of the images that the user wishes to import",
125 125
 	"status":   "Status is the the result of importing the image",
... ...
@@ -314,7 +314,14 @@ type DockerImageReference struct {
314 314
 	ID string `protobuf:"bytes,5,opt,name=iD"`
315 315
 }
316 316
 
317
-// ImageStreamImport imports an image from remote repositories into OpenShift.
317
+// The image stream import resource provides an easy way for a user to find and import Docker images
318
+// from other Docker registries into the server. Individual images or an entire image repository may
319
+// be imported, and users may choose to see the results of the import prior to tagging the resulting
320
+// images into the specified image stream.
321
+//
322
+// This API is intended for end-user tools that need to see the metadata of the image prior to import
323
+// (for instance, to generate an application from it). Clients that know the desired image can continue
324
+// to create spec.tags directly into their image streams.
318 325
 type ImageStreamImport struct {
319 326
 	unversioned.TypeMeta `json:",inline"`
320 327
 	// Standard object's metadata.
... ...
@@ -13,7 +13,18 @@ import "k8s.io/kubernetes/pkg/util/intstr/generated.proto";
13 13
 // Package-wide variables from generator "generated".
14 14
 option go_package = "v1";
15 15
 
16
-// Project is a logical top-level container for a set of origin resources
16
+// Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members,
17
+// a quota on the resources that the project may consume, and the security controls on the resources in
18
+// the project. Within a project, members may have different roles - project administrators can set
19
+// membership, editors can create and manage the resources, and viewers can see but not access running
20
+// containers. In a normal cluster project administrators are not able to alter their quotas - that is
21
+// restricted to cluster administrators.
22
+// 
23
+// Listing or watching projects will return only projects the user has the reader role on.
24
+// 
25
+// An OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed
26
+// as editable to end users while namespaces are not. Direct creation of a project is typically restricted
27
+// to administrators, while end users should use the requestproject resource.
17 28
 message Project {
18 29
   // Standard object's metadata.
19 30
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
... ...
@@ -6,7 +6,7 @@ package v1
6 6
 // ==== DO NOT EDIT THIS FILE MANUALLY ====
7 7
 
8 8
 var map_Project = map[string]string{
9
-	"":         "Project is a logical top-level container for a set of origin resources",
9
+	"":         "Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members, a quota on the resources that the project may consume, and the security controls on the resources in the project. Within a project, members may have different roles - project administrators can set membership, editors can create and manage the resources, and viewers can see but not access running containers. In a normal cluster project administrators are not able to alter their quotas - that is restricted to cluster administrators.\n\nListing or watching projects will return only projects the user has the reader role on.\n\nAn OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed as editable to end users while namespaces are not. Direct creation of a project is typically restricted to administrators, while end users should use the requestproject resource.",
10 10
 	"metadata": "Standard object's metadata.",
11 11
 	"spec":     "Spec defines the behavior of the Namespace.",
12 12
 	"status":   "Status describes the current status of a Namespace",
... ...
@@ -33,7 +33,18 @@ type ProjectStatus struct {
33 33
 
34 34
 // +genclient=true
35 35
 
36
-// Project is a logical top-level container for a set of origin resources
36
+// Projects are the unit of isolation and collaboration in OpenShift. A project has one or more members,
37
+// a quota on the resources that the project may consume, and the security controls on the resources in
38
+// the project. Within a project, members may have different roles - project administrators can set
39
+// membership, editors can create and manage the resources, and viewers can see but not access running
40
+// containers. In a normal cluster project administrators are not able to alter their quotas - that is
41
+// restricted to cluster administrators.
42
+//
43
+// Listing or watching projects will return only projects the user has the reader role on.
44
+//
45
+// An OpenShift project is an alternative representation of a Kubernetes namespace. Projects are exposed
46
+// as editable to end users while namespaces are not. Direct creation of a project is typically restricted
47
+// to administrators, while end users should use the requestproject resource.
37 48
 type Project struct {
38 49
 	unversioned.TypeMeta `json:",inline"`
39 50
 	// Standard object's metadata.
... ...
@@ -13,19 +13,35 @@ import "k8s.io/kubernetes/pkg/util/intstr/generated.proto";
13 13
 // Package-wide variables from generator "generated".
14 14
 option go_package = "v1";
15 15
 
16
-// Route encapsulates the inputs needed to connect an alias to endpoints.
16
+// A route allows developers to expose services through an HTTP(S) aware load balancing and proxy
17
+// layer via a public DNS entry. The route may further specify TLS options and a certificate, or
18
+// specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An
19
+// administrator typically configures their router to be visible outside the cluster firewall, and
20
+// may also add additional security, caching, or traffic controls on the service content. Routers
21
+// usually talk directly to the service endpoints.
22
+// 
23
+// Once a route is created, the `host` field may not be changed. Generally, routers use the oldest
24
+// route with a given host when resolving conflicts.
25
+// 
26
+// Routers are subject to additional customization and may support additional controls via the
27
+// annotations field.
28
+// 
29
+// Because administrators may configure multiple routers, the route status field is used to
30
+// return information to clients about the names and states of the route under each router.
31
+// If a client chooses a duplicate name, for instance, the route status conditions are used
32
+// to indicate the route cannot be chosen.
17 33
 message Route {
18
-  // Standard object's metadata.
34
+  // Standard object metadata.
19 35
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
20 36
 
21
-  // Spec is the desired state of the route
37
+  // spec is the desired state of the route
22 38
   optional RouteSpec spec = 2;
23 39
 
24
-  // Status is the current state of the route
40
+  // status is the current state of the route
25 41
   optional RouteStatus status = 3;
26 42
 }
27 43
 
28
-// RouteIngress holds information about the places where a route is exposed
44
+// RouteIngress holds information about the places where a route is exposed.
29 45
 message RouteIngress {
30 46
   // Host is the host string under which the route is exposed; this value is required
31 47
   optional string host = 1;
... ...
@@ -37,8 +53,8 @@ message RouteIngress {
37 37
   repeated RouteIngressCondition conditions = 3;
38 38
 }
39 39
 
40
-// RouteIngressCondition contains details for the current condition of this pod.
41
-// TODO: add LastTransitionTime, Reason, Message to match NodeCondition api.
40
+// RouteIngressCondition contains details for the current condition of this route on a particular
41
+// router.
42 42
 message RouteIngressCondition {
43 43
   // Type is the type of the condition.
44 44
   // Currently only Ready.
... ...
@@ -61,10 +77,10 @@ message RouteIngressCondition {
61 61
 
62 62
 // RouteList is a collection of Routes.
63 63
 message RouteList {
64
-  // Standard object's metadata.
64
+  // Standard object metadata.
65 65
   optional k8s.io.kubernetes.pkg.api.unversioned.ListMeta metadata = 1;
66 66
 
67
-  // Items is a list of routes
67
+  // items is a list of routes
68 68
   repeated Route items = 2;
69 69
 }
70 70
 
... ...
@@ -76,22 +92,34 @@ message RoutePort {
76 76
   optional k8s.io.kubernetes.pkg.util.intstr.IntOrString targetPort = 1;
77 77
 }
78 78
 
79
-// RouteSpec describes the route the user wishes to exist.
79
+// RouteSpec describes the hostname or path the route exposes, any security information,
80
+// and one or more backends the route points to. Weights on each backend can define
81
+// the balance of traffic sent to each backend - if all weights are zero the route will
82
+// be considered to have no backends and return a standard 503 response.
83
+// 
84
+// The `tls` field is optional and allows specific certificates or behavior for the
85
+// route. Routers typically configure a default certificate on a wildcard domain to
86
+// terminate routes without explicit certificates, but custom hostnames usually must
87
+// choose passthrough (send traffic directly to the backend via the TLS Server-Name-
88
+// Indication field) or provide a certificate.
80 89
 message RouteSpec {
81
-  // Host is an alias/DNS that points to the service. Optional
90
+  // host is an alias/DNS that points to the service. Optional.
91
+  // If not specified a route name will typically be automatically
92
+  // chosen.
82 93
   // Must follow DNS952 subdomain conventions.
83 94
   optional string host = 1;
84 95
 
85 96
  // Path that the router watches for, to route traffic to the service. Optional
86 97
   optional string path = 2;
87 98
 
88
-  // To is an object the route points to. Only the Service kind is allowed, and it will
89
-  // be defaulted to Service.
99
+  // to is an object the route should use as the primary backend. Only the Service kind
100
+  // is allowed, and it will be defaulted to Service. If the weight field is set to zero,
101
+  // no traffic will be sent to this service.
90 102
   optional RouteTargetReference to = 3;
91 103
 
92
-  // AlternateBackends is an extension of the 'to' field. If more than one service needs to be
104
+  // alternateBackends is an extension of the 'to' field. If more than one service needs to be
93 105
   // pointed to, then use this field. Use the weight field in RouteTargetReference object
94
-  // to specify relative preference
106
+  // to specify relative preference. If the weight field is zero, the backend is ignored.
95 107
   repeated RouteTargetReference alternateBackends = 4;
96 108
 
97 109
   // If specified, the port to be used by the router. Most routers will use all
... ...
@@ -99,14 +127,14 @@ message RouteSpec {
99 99
   // which port to use.
100 100
   optional RoutePort port = 5;
101 101
 
102
-  // TLS provides the ability to configure certificates and termination for the route
102
+  // The tls field provides the ability to configure certificates and termination for the route.
103 103
   optional TLSConfig tls = 6;
104 104
 }
105 105
 
106 106
 // RouteStatus provides relevant info about the status of a route, including which routers
107 107
 // acknowledge it.
108 108
 message RouteStatus {
109
-  // Ingress describes the places where the route may be exposed. The list of
109
+  // ingress describes the places where the route may be exposed. The list of
110 110
   // ingress points may contain duplicate Host or RouterName values. Routes
111 111
   // are considered live once they are `Ready`
112 112
   repeated RouteIngress ingress = 1;
... ...
@@ -118,10 +146,10 @@ message RouteTargetReference {
118 118
   // The kind of target that the route is referring to. Currently, only 'Service' is allowed
119 119
   optional string kind = 1;
120 120
 
121
-  // Name of the service/target that is being referred to. e.g. name of the service
121
+  // name of the service/target that is being referred to. e.g. name of the service
122 122
   optional string name = 2;
123 123
 
124
-  // Weight as an integer between 1 and 256 that specifies the target's relative weight
124
+  // weight as an integer between 1 and 256 that specifies the target's relative weight
125 125
   // against other target reference objects
126 126
   optional int32 weight = 3;
127 127
 }
... ...
@@ -132,35 +160,38 @@ message RouteTargetReference {
132 132
 // Caveat: This is WIP and will likely undergo modifications when sharding
133 133
 //         support is added.
134 134
 message RouterShard {
135
-  // ShardName uniquely identifies a router shard in the "set" of
135
+  // shardName uniquely identifies a router shard in the "set" of
136 136
   // routers used for routing traffic to the services.
137 137
   optional string shardName = 1;
138 138
 
139
-  // DNSSuffix for the shard ala: shard-1.v3.openshift.com
139
+  // dnsSuffix for the shard ala: shard-1.v3.openshift.com
140 140
   optional string dnsSuffix = 2;
141 141
 }
142 142
 
143 143
 // TLSConfig defines config used to secure a route and provide termination
144 144
 message TLSConfig {
145
-  // Termination indicates termination type.
145
+  // termination indicates termination type.
146 146
   optional string termination = 1;
147 147
 
148
-  // Certificate provides certificate contents
148
+  // certificate provides certificate contents
149 149
   optional string certificate = 2;
150 150
 
151
-  // Key provides key file contents
151
+  // key provides key file contents
152 152
   optional string key = 3;
153 153
 
154
-  // CACertificate provides the cert authority certificate contents
154
+  // caCertificate provides the cert authority certificate contents
155 155
   optional string caCertificate = 4;
156 156
 
157
-  // DestinationCACertificate provides the contents of the ca certificate of the final destination.  When using reencrypt
157
+  // destinationCACertificate provides the contents of the ca certificate of the final destination.  When using reencrypt
158 158
   // termination this file should be provided in order to have routers use it for health checks on the secure connection
159 159
   optional string destinationCACertificate = 5;
160 160
 
161
-  // InsecureEdgeTerminationPolicy indicates the desired behavior for
162
-  // insecure connections to an edge-terminated route:
163
-  //   disable, allow or redirect
161
+  // insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While
162
+  // each router may make its own decisions on which ports to expose, this is normally port 80.
163
+  // 
164
+  // * Allow - traffic is sent to the server on the insecure port (default)
165
+  // * Disable - no traffic is allowed on the insecure port.
166
+  // * Redirect - clients are redirected to the secure port.
164 167
   optional string insecureEdgeTerminationPolicy = 6;
165 168
 }
166 169
 
... ...
@@ -6,10 +6,10 @@ package v1
6 6
 // ==== DO NOT EDIT THIS FILE MANUALLY ====
7 7
 
8 8
 var map_Route = map[string]string{
9
-	"":         "Route encapsulates the inputs needed to connect an alias to endpoints.",
10
-	"metadata": "Standard object's metadata.",
11
-	"spec":     "Spec is the desired state of the route",
12
-	"status":   "Status is the current state of the route",
9
+	"":         "A route allows developers to expose services through an HTTP(S) aware load balancing and proxy layer via a public DNS entry. The route may further specify TLS options and a certificate, or specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An administrator typically configures their router to be visible outside the cluster firewall, and may also add additional security, caching, or traffic controls on the service content. Routers usually talk directly to the service endpoints.\n\nOnce a route is created, the `host` field may not be changed. Generally, routers use the oldest route with a given host when resolving conflicts.\n\nRouters are subject to additional customization and may support additional controls via the annotations field.\n\nBecause administrators may configure multiple routers, the route status field is used to return information to clients about the names and states of the route under each router. If a client chooses a duplicate name, for instance, the route status conditions are used to indicate the route cannot be chosen.",
10
+	"metadata": "Standard object metadata.",
11
+	"spec":     "spec is the desired state of the route",
12
+	"status":   "status is the current state of the route",
13 13
 }
14 14
 
15 15
 func (Route) SwaggerDoc() map[string]string {
... ...
@@ -17,7 +17,7 @@ func (Route) SwaggerDoc() map[string]string {
17 17
 }
18 18
 
19 19
 var map_RouteIngress = map[string]string{
20
-	"":           "RouteIngress holds information about the places where a route is exposed",
20
+	"":           "RouteIngress holds information about the places where a route is exposed.",
21 21
 	"host":       "Host is the host string under which the route is exposed; this value is required",
22 22
 	"routerName": "Name is a name chosen by the router to identify itself; this value is required",
23 23
 	"conditions": "Conditions is the state of the route, may be empty.",
... ...
@@ -28,7 +28,7 @@ func (RouteIngress) SwaggerDoc() map[string]string {
28 28
 }
29 29
 
30 30
 var map_RouteIngressCondition = map[string]string{
31
-	"":                   "RouteIngressCondition contains details for the current condition of this pod.",
31
+	"":                   "RouteIngressCondition contains details for the current condition of this route on a particular router.",
32 32
 	"type":               "Type is the type of the condition. Currently only Ready.",
33 33
 	"status":             "Status is the status of the condition. Can be True, False, Unknown.",
34 34
 	"reason":             "(brief) reason for the condition's last transition, and is usually a machine and human readable constant",
... ...
@@ -42,8 +42,8 @@ func (RouteIngressCondition) SwaggerDoc() map[string]string {
42 42
 
43 43
 var map_RouteList = map[string]string{
44 44
 	"":         "RouteList is a collection of Routes.",
45
-	"metadata": "Standard object's metadata.",
46
-	"items":    "Items is a list of routes",
45
+	"metadata": "Standard object metadata.",
46
+	"items":    "items is a list of routes",
47 47
 }
48 48
 
49 49
 func (RouteList) SwaggerDoc() map[string]string {
... ...
@@ -60,13 +60,13 @@ func (RoutePort) SwaggerDoc() map[string]string {
60 60
 }
61 61
 
62 62
 var map_RouteSpec = map[string]string{
63
-	"":                  "RouteSpec describes the route the user wishes to exist.",
64
-	"host":              "Host is an alias/DNS that points to the service. Optional Must follow DNS952 subdomain conventions.",
63
+	"":                  "RouteSpec describes the hostname or path the route exposes, any security information, and one or more backends the route points to. Weights on each backend can define the balance of traffic sent to each backend - if all weights are zero the route will be considered to have no backends and return a standard 503 response.\n\nThe `tls` field is optional and allows specific certificates or behavior for the route. Routers typically configure a default certificate on a wildcard domain to terminate routes without explicit certificates, but custom hostnames usually must choose passthrough (send traffic directly to the backend via the TLS Server-Name- Indication field) or provide a certificate.",
64
+	"host":              "host is an alias/DNS that points to the service. Optional. If not specified a route name will typically be automatically chosen. Must follow DNS952 subdomain conventions.",
65 65
 	"path":              "Path that the router watches for, to route traffic for to the service. Optional",
66
-	"to":                "To is an object the route points to. Only the Service kind is allowed, and it will be defaulted to Service.",
67
-	"alternateBackends": "AlternateBackends is an extension of the 'to' field. If more than one service needs to be pointed to, then use this field. Use the weight field in RouteTargetReference object to specify relative preference",
66
+	"to":                "to is an object the route should use as the primary backend. Only the Service kind is allowed, and it will be defaulted to Service. If the weight field is set to zero, no traffic will be sent to this service.",
67
+	"alternateBackends": "alternateBackends is an extension of the 'to' field. If more than one service needs to be pointed to, then use this field. Use the weight field in RouteTargetReference object to specify relative preference. If the weight field is zero, the backend is ignored.",
68 68
 	"port":              "If specified, the port to be used by the router. Most routers will use all endpoints exposed by the service by default - set this value to instruct routers which port to use.",
69
-	"tls":               "TLS provides the ability to configure certificates and termination for the route",
69
+	"tls":               "The tls field provides the ability to configure certificates and termination for the route.",
70 70
 }
71 71
 
72 72
 func (RouteSpec) SwaggerDoc() map[string]string {
... ...
@@ -75,7 +75,7 @@ func (RouteSpec) SwaggerDoc() map[string]string {
75 75
 
76 76
 var map_RouteStatus = map[string]string{
77 77
 	"":        "RouteStatus provides relevant info about the status of a route, including which routers acknowledge it.",
78
-	"ingress": "Ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are `Ready`",
78
+	"ingress": "ingress describes the places where the route may be exposed. The list of ingress points may contain duplicate Host or RouterName values. Routes are considered live once they are `Ready`",
79 79
 }
80 80
 
81 81
 func (RouteStatus) SwaggerDoc() map[string]string {
... ...
@@ -85,8 +85,8 @@ func (RouteStatus) SwaggerDoc() map[string]string {
85 85
 var map_RouteTargetReference = map[string]string{
86 86
 	"":       "RouteTargetReference specifies the target that resolve into endpoints. Only the 'Service' kind is allowed. Use 'weight' field to emphasize one over others.",
87 87
 	"kind":   "The kind of target that the route is referring to. Currently, only 'Service' is allowed",
88
-	"name":   "Name of the service/target that is being referred to. e.g. name of the service",
89
-	"weight": "Weight as an integer between 1 and 256 that specifies the target's relative weight against other target reference objects",
88
+	"name":   "name of the service/target that is being referred to. e.g. name of the service",
89
+	"weight": "weight as an integer between 1 and 256 that specifies the target's relative weight against other target reference objects",
90 90
 }
91 91
 
92 92
 func (RouteTargetReference) SwaggerDoc() map[string]string {
... ...
@@ -95,8 +95,8 @@ func (RouteTargetReference) SwaggerDoc() map[string]string {
95 95
 
96 96
 var map_RouterShard = map[string]string{
97 97
 	"":          "RouterShard has information of a routing shard and is used to generate host names and routing table entries when a routing shard is allocated for a specific route. Caveat: This is WIP and will likely undergo modifications when sharding\n        support is added.",
98
-	"shardName": "ShardName uniquely identifies a router shard in the \"set\" of routers used for routing traffic to the services.",
99
-	"dnsSuffix": "DNSSuffix for the shard ala: shard-1.v3.openshift.com",
98
+	"shardName": "shardName uniquely identifies a router shard in the \"set\" of routers used for routing traffic to the services.",
99
+	"dnsSuffix": "dnsSuffix for the shard ala: shard-1.v3.openshift.com",
100 100
 }
101 101
 
102 102
 func (RouterShard) SwaggerDoc() map[string]string {
... ...
@@ -105,12 +105,12 @@ func (RouterShard) SwaggerDoc() map[string]string {
105 105
 
106 106
 var map_TLSConfig = map[string]string{
107 107
 	"":                              "TLSConfig defines config used to secure a route and provide termination",
108
-	"termination":                   "Termination indicates termination type.",
109
-	"certificate":                   "Certificate provides certificate contents",
110
-	"key":                           "Key provides key file contents",
111
-	"caCertificate":                 "CACertificate provides the cert authority certificate contents",
112
-	"destinationCACertificate":      "DestinationCACertificate provides the contents of the ca certificate of the final destination.  When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection",
113
-	"insecureEdgeTerminationPolicy": "InsecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to an edge-terminated route:\n  disable, allow or redirect",
108
+	"termination":                   "termination indicates termination type.",
109
+	"certificate":                   "certificate provides certificate contents",
110
+	"key":                           "key provides key file contents",
111
+	"caCertificate":                 "caCertificate provides the cert authority certificate contents",
112
+	"destinationCACertificate":      "destinationCACertificate provides the contents of the ca certificate of the final destination.  When using reencrypt termination this file should be provided in order to have routers use it for health checks on the secure connection",
113
+	"insecureEdgeTerminationPolicy": "insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While each router may make its own decisions on which ports to expose, this is normally port 80.\n\n* Allow - traffic is sent to the server on the insecure port (default) * Disable - no traffic is allowed on the insecure port. * Redirect - clients are redirected to the secure port.",
114 114
 }
115 115
 
116 116
 func (TLSConfig) SwaggerDoc() map[string]string {
... ...
@@ -8,46 +8,71 @@ import (
8 8
 
9 9
 // +genclient=true
10 10
 
11
-// Route encapsulates the inputs needed to connect an alias to endpoints.
11
+// A route allows developers to expose services through an HTTP(S) aware load balancing and proxy
12
+// layer via a public DNS entry. The route may further specify TLS options and a certificate, or
13
+// specify a public CNAME that the router should also accept for HTTP and HTTPS traffic. An
14
+// administrator typically configures their router to be visible outside the cluster firewall, and
15
+// may also add additional security, caching, or traffic controls on the service content. Routers
16
+// usually talk directly to the service endpoints.
17
+//
18
+// Once a route is created, the `host` field may not be changed. Generally, routers use the oldest
19
+// route with a given host when resolving conflicts.
20
+//
21
+// Routers are subject to additional customization and may support additional controls via the
22
+// annotations field.
23
+//
24
+// Because administrators may configure multiple routers, the route status field is used to
25
+// return information to clients about the names and states of the route under each router.
26
+// If a client chooses a duplicate name, for instance, the route status conditions are used
27
+// to indicate the route cannot be chosen.
12 28
 type Route struct {
13 29
 	unversioned.TypeMeta `json:",inline"`
14
-	// Standard object's metadata.
30
+	// Standard object metadata.
15 31
 	kapi.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
16 32
 
17
-	// Spec is the desired state of the route
33
+	// spec is the desired state of the route
18 34
 	Spec RouteSpec `json:"spec" protobuf:"bytes,2,opt,name=spec"`
19
-	// Status is the current state of the route
35
+	// status is the current state of the route
20 36
 	Status RouteStatus `json:"status" protobuf:"bytes,3,opt,name=status"`
21 37
 }
22 38
 
23 39
 // RouteList is a collection of Routes.
24 40
 type RouteList struct {
25 41
 	unversioned.TypeMeta `json:",inline"`
26
-	// Standard object's metadata.
42
+	// Standard object metadata.
27 43
 	unversioned.ListMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
28 44
 
29
-	// Items is a list of routes
45
+	// items is a list of routes
30 46
 	Items []Route `json:"items" protobuf:"bytes,2,rep,name=items"`
31 47
 }
32 48
 
33
-// RouteSpec describes the route the user wishes to exist.
49
+// RouteSpec describes the hostname or path the route exposes, any security information,
50
+// and one or more backends the route points to. Weights on each backend can define
51
+// the balance of traffic sent to each backend - if all weights are zero the route will
52
+// be considered to have no backends and return a standard 503 response.
53
+//
54
+// The `tls` field is optional and allows specific certificates or behavior for the
55
+// route. Routers typically configure a default certificate on a wildcard domain to
56
+// terminate routes without explicit certificates, but custom hostnames usually must
57
+// choose passthrough (send traffic directly to the backend via the TLS Server-Name-
58
+// Indication field) or provide a certificate.
34 59
 type RouteSpec struct {
35
-	// Ports are the ports that the user wishes to expose.
36
-	//Ports []RoutePort `json:"ports,omitempty"`
37
-
38
-	// Host is an alias/DNS that points to the service. Optional
60
+	// host is an alias/DNS that points to the service. Optional.
61
+	// If not specified a route name will typically be automatically
62
+	// chosen.
39 63
 	// Must follow DNS952 subdomain conventions.
40 64
 	Host string `json:"host" protobuf:"bytes,1,opt,name=host"`
41 65
	// Path that the router watches for, to route traffic to the service. Optional
42 66
 	Path string `json:"path,omitempty" protobuf:"bytes,2,opt,name=path"`
43 67
 
44
-	// To is an object the route points to. Only the Service kind is allowed, and it will
45
-	// be defaulted to Service.
68
+	// to is an object the route should use as the primary backend. Only the Service kind
69
+	// is allowed, and it will be defaulted to Service. If the weight field is set to zero,
70
+	// no traffic will be sent to this service.
46 71
 	To RouteTargetReference `json:"to" protobuf:"bytes,3,opt,name=to"`
47 72
 
48
-	// AlternateBackends is an extension of the 'to' field. If more than one service needs to be
73
+	// alternateBackends is an extension of the 'to' field. If more than one service needs to be
49 74
 	// pointed to, then use this field. Use the weight field in RouteTargetReference object
50
-	// to specify relative preference
75
+	// to specify relative preference. If the weight field is zero, the backend is ignored.
51 76
 	AlternateBackends []RouteTargetReference `json:"alternateBackends,omitempty" protobuf:"bytes,4,rep,name=alternateBackends"`
52 77
 
53 78
 	// If specified, the port to be used by the router. Most routers will use all
... ...
@@ -55,7 +80,7 @@ type RouteSpec struct {
55 55
 	// which port to use.
56 56
 	Port *RoutePort `json:"port,omitempty" protobuf:"bytes,5,opt,name=port"`
57 57
 
58
-	// TLS provides the ability to configure certificates and termination for the route
58
+	// The tls field provides the ability to configure certificates and termination for the route.
59 59
 	TLS *TLSConfig `json:"tls,omitempty" protobuf:"bytes,6,opt,name=tls"`
60 60
 }
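
The weight semantics described above lend themselves to a simple traffic split. A hedged sketch, with invented service names and an illustrative 75/25 split, again assuming it sits inside a function in the same package:

  primary, canary := int32(75), int32(25)
  spec := RouteSpec{
      Host: "www.example.com",
      To:   RouteTargetReference{Kind: "Service", Name: "frontend-v1", Weight: &primary},
      AlternateBackends: []RouteTargetReference{
          {Kind: "Service", Name: "frontend-v2", Weight: &canary},
      },
  }
  _ = spec // if every weight were zero, routers would treat the route as having no backends (503)
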
61 61
 
... ...
@@ -65,10 +90,10 @@ type RouteTargetReference struct {
65 65
 	// The kind of target that the route is referring to. Currently, only 'Service' is allowed
66 66
 	Kind string `json:"kind" protobuf:"bytes,1,opt,name=kind"`
67 67
 
68
-	// Name of the service/target that is being referred to. e.g. name of the service
68
+	// name of the service/target that is being referred to. e.g. name of the service
69 69
 	Name string `json:"name" protobuf:"bytes,2,opt,name=name"`
70 70
 
71
-	// Weight as an integer between 1 and 256 that specifies the target's relative weight
71
+	// weight as an integer between 1 and 256 that specifies the target's relative weight
72 72
 	// against other target reference objects
73 73
 	Weight *int32 `json:"weight" protobuf:"varint,3,opt,name=weight"`
74 74
 }
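
One reasonable reading of "relative weight" is that a backend receives its weight divided by the sum of all backend weights. The helper below is not part of the API; it is only a sketch of that arithmetic under that assumption.

  // backendShare returns the fraction of traffic a backend with weight w would
  // receive if load balancing is proportional to weight. total is the sum of
  // the weights of all backends on the route.
  func backendShare(w, total int32) float64 {
      if total <= 0 {
          return 0 // all weights zero: the route effectively has no backends
      }
      return float64(w) / float64(total)
  }
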
... ...
@@ -84,13 +109,13 @@ type RoutePort struct {
84 84
 // RouteStatus provides relevant info about the status of a route, including which routers
85 85
 // acknowledge it.
86 86
 type RouteStatus struct {
87
-	// Ingress describes the places where the route may be exposed. The list of
87
+	// ingress describes the places where the route may be exposed. The list of
88 88
 	// ingress points may contain duplicate Host or RouterName values. Routes
89 89
 	// are considered live once they are `Ready`.
90 90
 	Ingress []RouteIngress `json:"ingress" protobuf:"bytes,1,rep,name=ingress"`
91 91
 }
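
Because routes are considered live once they are Ready, a client will typically scan status.ingress for a Ready-style condition. The sketch below assumes RouteIngress carries a Conditions []RouteIngressCondition field with Type and Status members (only partially visible in this hunk), and the literal "Ready" follows the comment above rather than a constant shown here.

  // routeIsLive reports whether at least one router has reported the route Ready.
  // The Conditions field, the Status comparison, and the "Ready" literal are
  // assumptions based on the surrounding comments, not guaranteed by this diff.
  func routeIsLive(status RouteStatus) bool {
      for _, ingress := range status.Ingress {
          for _, cond := range ingress.Conditions {
              if string(cond.Type) == "Ready" && cond.Status == kapi.ConditionTrue {
                  return true
              }
          }
      }
      return false
  }
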
92 92
 
93
-// RouteIngress holds information about the places where a route is exposed
93
+// RouteIngress holds information about the places where a route is exposed.
94 94
 type RouteIngress struct {
95 95
 	// Host is the host string under which the route is exposed; this value is required
96 96
 	Host string `json:"host,omitempty" protobuf:"bytes,1,opt,name=host"`
... ...
@@ -110,8 +135,8 @@ const (
110 110
 	// TODO: add other route condition types
111 111
 )
112 112
 
113
-// RouteIngressCondition contains details for the current condition of this pod.
114
-// TODO: add LastTransitionTime, Reason, Message to match NodeCondition api.
113
+// RouteIngressCondition contains details for the current condition of this route on a particular
114
+// router.
115 115
 type RouteIngressCondition struct {
116 116
 	// Type is the type of the condition.
117 117
 	// Currently only Ready.
... ...
@@ -134,35 +159,38 @@ type RouteIngressCondition struct {
134 134
 // Caveat: This is WIP and will likely undergo modifications when sharding
135 135
 //         support is added.
136 136
 type RouterShard struct {
137
-	// ShardName uniquely identifies a router shard in the "set" of
137
+	// shardName uniquely identifies a router shard in the "set" of
138 138
 	// routers used for routing traffic to the services.
139 139
 	ShardName string `json:"shardName" protobuf:"bytes,1,opt,name=shardName"`
140 140
 
141
-	// DNSSuffix for the shard ala: shard-1.v3.openshift.com
141
+	// dnsSuffix for the shard ala: shard-1.v3.openshift.com
142 142
 	DNSSuffix string `json:"dnsSuffix" protobuf:"bytes,2,opt,name=dnsSuffix"`
143 143
 }
144 144
 
145 145
 // TLSConfig defines config used to secure a route and provide termination
146 146
 type TLSConfig struct {
147
-	// Termination indicates termination type.
147
+	// termination indicates termination type.
148 148
 	Termination TLSTerminationType `json:"termination" protobuf:"bytes,1,opt,name=termination,casttype=TLSTerminationType"`
149 149
 
150
-	// Certificate provides certificate contents
150
+	// certificate provides certificate contents
151 151
 	Certificate string `json:"certificate,omitempty" protobuf:"bytes,2,opt,name=certificate"`
152 152
 
153
-	// Key provides key file contents
153
+	// key provides key file contents
154 154
 	Key string `json:"key,omitempty" protobuf:"bytes,3,opt,name=key"`
155 155
 
156
-	// CACertificate provides the cert authority certificate contents
156
+	// caCertificate provides the cert authority certificate contents
157 157
 	CACertificate string `json:"caCertificate,omitempty" protobuf:"bytes,4,opt,name=caCertificate"`
158 158
 
159
-	// DestinationCACertificate provides the contents of the ca certificate of the final destination.  When using reencrypt
159
+	// destinationCACertificate provides the contents of the CA certificate of the final destination. When using reencrypt
160 160
 	// termination, this field should be provided in order to have routers use it for health checks on the secure connection.
161 161
 	DestinationCACertificate string `json:"destinationCACertificate,omitempty" protobuf:"bytes,5,opt,name=destinationCACertificate"`
162 162
 
163
-	// InsecureEdgeTerminationPolicy indicates the desired behavior for
164
-	// insecure connections to an edge-terminated route:
165
-	//   disable, allow or redirect
163
+	// insecureEdgeTerminationPolicy indicates the desired behavior for insecure connections to a route. While
164
+	// each router may make its own decisions on which ports to expose, this is normally port 80.
165
+	//
166
+	// * Allow - traffic is sent to the server on the insecure port (default).
167
+	// * Disable - no traffic is allowed on the insecure port.
168
+	// * Redirect - clients are redirected to the secure port.
166 169
 	InsecureEdgeTerminationPolicy InsecureEdgeTerminationPolicyType `json:"insecureEdgeTerminationPolicy,omitempty" protobuf:"bytes,6,opt,name=insecureEdgeTerminationPolicy,casttype=InsecureEdgeTerminationPolicyType"`
167 170
 }
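
To illustrate the termination and insecure-traffic options described above, here is a sketch of an edge-terminated configuration that redirects plain-HTTP clients to the secure port. The constant names TLSTerminationEdge and InsecureEdgeTerminationPolicyRedirect are assumed from the TLSTerminationType and InsecureEdgeTerminationPolicyType enums (not shown in this hunk), and the PEM strings are placeholders.

  tls := TLSConfig{
      Termination: TLSTerminationEdge, // assumed constant name for edge termination
      Certificate: "-----BEGIN CERTIFICATE-----\n(placeholder)\n-----END CERTIFICATE-----",
      Key:         "-----BEGIN PRIVATE KEY-----\n(placeholder)\n-----END PRIVATE KEY-----",
      // Redirect: clients arriving on the insecure port (normally 80) are sent to the secure port.
      InsecureEdgeTerminationPolicy: InsecureEdgeTerminationPolicyRedirect, // assumed constant name
  }
  _ = tls // for passthrough termination the certificate fields would instead be left empty
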
168 171
 
... ...
@@ -31,7 +31,11 @@ message GroupList {
31 31
   repeated Group items = 2;
32 32
 }
33 33
 
34
-// Identity records a successful authentication of a user with an identity provider
34
+// Identity records a successful authentication of a user with an identity provider. The
35
+// information about the source of authentication is stored on the identity, and the identity
36
+// is then associated with a single user object. Multiple identities can reference a single
37
+// user. Information retrieved from the authentication provider is stored in the extra field
38
+// using a schema determined by the provider.
35 39
 message Identity {
36 40
   // Standard object's metadata.
37 41
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
... ...
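
To make the Identity description above concrete, a Go sketch against the corresponding v1 type might look like the following. The "github" provider, the user names, and the User and Extra field names are assumptions for illustration; only providerName and providerUserName appear in this change.

  identity := Identity{
      // Conventionally "<providerName>:<providerUserName>" (assumed, not shown here).
      ObjectMeta:       kapi.ObjectMeta{Name: "github:jdoe"},
      ProviderName:     "github",
      ProviderUserName: "jdoe",
      User:             kapi.ObjectReference{Name: "jdoe"},    // link to the single owning User (field name assumed)
      Extra:            map[string]string{"name": "Jane Doe"}, // schema determined by the provider
  }
  _ = identity
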
@@ -68,7 +72,11 @@ message OptionalNames {
68 68
   repeated string items = 1;
69 69
 }
70 70
 
71
-// User describes someone that makes requests to the API
71
+// Upon login, every user of the system receives a User and Identity resource. Administrators
72
+// may directly manipulate the attributes of the users for their own tracking, or set groups
73
+// via the API. The user name is unique and is chosen based on the value provided by the
74
+// identity provider - if a user already exists with the incoming name, the user name may have
75
+// a number appended to it depending on the configuration of the system.
72 76
 message User {
73 77
   // Standard object's metadata.
74 78
   optional k8s.io.kubernetes.pkg.api.v1.ObjectMeta metadata = 1;
... ...
@@ -26,7 +26,7 @@ func (GroupList) SwaggerDoc() map[string]string {
26 26
 }
27 27
 
28 28
 var map_Identity = map[string]string{
29
-	"":                 "Identity records a successful authentication of a user with an identity provider",
29
+	"":                 "Identity records a successful authentication of a user with an identity provider. The information about the source of authentication is stored on the identity, and the identity is then associated with a single user object. Multiple identities can reference a single user. Information retrieved from the authentication provider is stored in the extra field using a schema determined by the provider.",
30 30
 	"metadata":         "Standard object's metadata.",
31 31
 	"providerName":     "ProviderName is the source of identity information",
32 32
 	"providerUserName": "ProviderUserName uniquely represents this identity in the scope of the provider",
... ...
@@ -49,7 +49,7 @@ func (IdentityList) SwaggerDoc() map[string]string {
49 49
 }
50 50
 
51 51
 var map_User = map[string]string{
52
-	"":           "User describes someone that makes requests to the API",
52
+	"":           "Upon login, every user of the system receives a User and Identity resource. Administrators may directly manipulate the attributes of the users for their own tracking, or set groups via the API. The user name is unique and is chosen based on the value provided by the identity provider - if a user already exists with the incoming name, the user name may have a number appended to it depending on the configuration of the system.",
53 53
 	"metadata":   "Standard object's metadata.",
54 54
 	"fullName":   "FullName is the full name of user",
55 55
 	"identities": "Identities are the identities associated with this user",
... ...
@@ -7,12 +7,13 @@ import (
7 7
 	kapi "k8s.io/kubernetes/pkg/api/v1"
8 8
 )
9 9
 
10
-// Auth system gets identity name and provider
11
-// POST to UserIdentityMapping, get back error or a filled out UserIdentityMapping object
12
-
13 10
 // +genclient=true
14 11
 
15
-// User describes someone that makes requests to the API
12
+// Upon login, every user of the system receives a User and Identity resource. Administrators
13
+// may directly manipulate the attributes of the users for their own tracking, or set groups
14
+// via the API. The user name is unique and is chosen based on the value provided by the
15
+// identity provider - if a user already exists with the incoming name, the user name may have
16
+// a number appended to it depending on the configuration of the system.
16 17
 type User struct {
17 18
 	unversioned.TypeMeta `json:",inline"`
18 19
 	// Standard object's metadata.
... ...
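
And the User side of that relationship, sketched with placeholder values. FullName and Identities appear in the generated docs above; the Groups field name is an assumption based on the note that groups can be set via the API.

  user := User{
      ObjectMeta: kapi.ObjectMeta{Name: "jdoe"}, // unique; may gain a numeric suffix if the name is taken
      FullName:   "Jane Doe",
      Identities: []string{"github:jdoe"}, // refers back to the Identity sketched earlier
      Groups:     []string{"developers"},  // field name assumed
  }
  _ = user
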
@@ -37,7 +38,11 @@ type UserList struct {
37 37
 	Items []User `json:"items" protobuf:"bytes,2,rep,name=items"`
38 38
 }
39 39
 
40
-// Identity records a successful authentication of a user with an identity provider
40
+// Identity records a successful authentication of a user with an identity provider. The
41
+// information about the source of authentication is stored on the identity, and the identity
42
+// is then associated with a single user object. Multiple identities can reference a single
43
+// user. Information retrieved from the authentication provider is stored in the extra field
44
+// using a schema determined by the provider.
41 45
 type Identity struct {
42 46
 	unversioned.TypeMeta `json:",inline"`
43 47
 	// Standard object's metadata.