 ## Description
 
The `openshift/origin-haproxy-router` is an [HAProxy](http://www.haproxy.org/)-based router that serves as an external-to-internal
interface to OpenShift [services](https://github.com/kubernetes/kubernetes/blob/master/docs/user-guide/services.md).
 
The router is meant to run as a pod.  When running the router you must
ensure that the router can use ports 80 and 443 on the host (minion) in
order to forward traffic.  In a deployed environment, the router minion should
also have external IP addresses that can be exposed for DNS-based routing.
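
One quick way to confirm nothing else is bound to those ports on the host (assuming the `ss` utility from the `iproute` package is available):

    $ sudo ss -tlnp | grep -E ':(80|443)\b' || echo "ports 80 and 443 are free"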
 
 ## Creating Routes
 
When you create a route you specify the `hostname` and the `service` that the route connects.  The `hostname` is the
web host that the router will use to direct traffic.  This host name should be a domain name that you
already own, for instance `www.example.com`.  Alternatively, you may leave the host name
blank and a system-generated host name will be created.  It is important to note that at this point
DNS resolution of host names is external to the OpenShift system.
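
For example, a route connecting the host `www.example.com` to a service named `frontend` (both names are illustrative)
would look like the following; a complete `route.json` example appears later in this document.

    {
      "kind": "Route",
      "apiVersion": "v1beta3",
      "metadata": {
        "name": "example-route"
      },
      "spec": {
        "host": "www.example.com",
        "to": {
          "kind": "Service",
          "name": "frontend"
        }
      }
    }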
 
 
 ## Running the router
 
 
 ### In the Vagrant environment
 
Please note that starting the router in the vagrant environment requires it to be pulled into docker.  This may take some time.
Once it is pulled it will start and be visible in the `docker ps` list of containers, and your pod will be marked as running.
 
 A router requires a service account that has access to a security context constraint which allows host ports.
 To create this service account:
 
     $ echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
 
    # You may either create a new SCC or use an existing SCC.  The following command will
    # display the existing SCCs and whether they support host network and host ports.
    $ oc get scc -t "{{range .items}}{{.metadata.name}}: n={{.allowHostNetwork}},p={{.allowHostPorts}}; {{end}}"
    privileged: n=true,p=true; restricted: n=false,p=false;
 
     $ oc edit scc <name>
     ... add the service account in the form of system:serviceaccount:<namespace>:<name> ...
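
For example, assuming the `router` service account created above lives in the `default` namespace, the entry added
under the SCC's `users` list would be:

    users:
    - system:serviceaccount:default:router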
 
 
 #### Single machine vagrant environment
 
     $ vagrant up
     $ vagrant ssh
     [vagrant@openshiftdev origin]$ cd /data/src/github.com/openshift/origin/
     [vagrant@openshiftdev origin]$ make clean && make
     [vagrant@openshiftdev origin]$ export PATH=/data/src/github.com/openshift/origin/_output/local/bin/linux/amd64:$PATH
     [vagrant@openshiftdev origin]$ sudo /data/src/github.com/openshift/origin/_output/local/bin/linux/amd64/openshift start &
 
    # If running in https mode, ensure oc can authenticate to the master
     [vagrant@openshiftdev origin]$ export KUBECONFIG=/data/src/github.com/openshift/origin/openshift.local.config/master/admin.kubeconfig
     [vagrant@openshiftdev origin]$ sudo chmod a+r "$KUBECONFIG"
     [vagrant@openshiftdev origin]$ sudo chmod a+r openshift.local.config/master/openshift-router.kubeconfig
     [vagrant@openshiftdev origin]$ oadm router --credentials="openshift.local.config/master/openshift-router.kubeconfig" --service-account=router
     [vagrant@openshiftdev origin]$ oc get pods
 
 #### Clustered vagrant environment
 
 
     $ export OPENSHIFT_DEV_CLUSTER=true
     $ vagrant up
     $ vagrant ssh master
     [vagrant@openshift-master ~]$ oadm router --credentials="${KUBECONFIG}" --service-account=router
 
 
 
 ### In a deployed environment
 
 In order to run the router in a deployed environment the following conditions must be met:
 
 * The machine the router will run on must be provisioned as a minion in the cluster (for networking configuration)
* The machine may or may not be registered with the master.  Optimally it will not serve pods while also serving as the router
* The machine must not have services running on it that bind to host ports 80 and 443, since these are the ports the router uses for traffic
 
To install the router pod you use the `oadm router` command, passing the flag `--credentials=<kubeconfig_file>`.
The credentials flag controls the identity that the router will use to talk to the master (and the address of the master), so in most
environments you can use the `${CONFIG_DIR}/master/openshift-router.kubeconfig` file.  Once you run this command you can check the
deployment status of the router by running `oc get dc router`.

`oadm router` offers other options for deploying routers - run `oadm router --help` for more details.
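
For example, an invocation similar to the vagrant one shown earlier (assuming the `router` service account created above):

    $ oadm router --credentials="${CONFIG_DIR}/master/openshift-router.kubeconfig" --service-account=router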
 
 ### Manually
 
To run the router manually (outside of a pod) you should first build the images with the instructions found below.  Then you
can run the router anywhere that it can access both the pods and the master.
The router can use either the host or the container network stack and
binds/exposes ports 80 and 443, which allows the router to be used by a DNS
server for incoming traffic.
This means that the host where the router runs must not have any other
services bound to those ports (80 and 443).
 
 
#### Example using the host network

    $ docker run --rm -it --net=host openshift/origin-haproxy-router --master=$KUBE_MASTER_URL

#### Example using the container network, exposing ports 80 and 443

    $ docker run --rm -it -p 80:80 -p 443:443 openshift/origin-haproxy-router --master=$KUBE_MASTER_URL

An example value for `KUBE_MASTER_URL` is `https://10.0.2.15:8443`.
 
 
 ## Monitoring the router
 
Since the router runs as a docker container, you can use the `docker logs <id>` command to monitor it.
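
For example:

    $ docker ps | grep origin-haproxy-router    # find the router container <id>
    $ docker logs -f <id>                       # follow the router's log output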
 
 ## Testing your route
 
 To test your route independent of DNS you can send a host header to the router.  The following is an example.
 
     $ ..... vagrant up with single machine instructions .......
     $ ..... create config files listed below in ~ ........
     [vagrant@openshiftdev origin]$ oc create -f ~/pod.json
     [vagrant@openshiftdev origin]$ oc create -f ~/service.json
     [vagrant@openshiftdev origin]$ oc create -f ~/route.json
     [vagrant@openshiftdev origin]$ curl -H "Host:hello-openshift.v3.rhcloud.com" <vm ip>
     Hello OpenShift!
 
     $ ..... vagrant up with cluster instructions .....
     $ ..... create config files listed below in ~ ........
     [vagrant@openshift-master ~]$ oc create -f ~/pod.json
     [vagrant@openshift-master ~]$ oc create -f ~/service.json
     [vagrant@openshift-master ~]$ oc create -f ~/route.json
     # take note of what minion number the router is deployed on
     [vagrant@openshift-master ~]$ oc get pods
     [vagrant@openshift-master ~]$ curl -H "Host:hello-openshift.v3.rhcloud.com" openshift-minion-<1,2>
     Hello OpenShift!
 
 
 
 
 Configuration files (to be created in the vagrant home directory)
 
 pod.json
 
     {
       "kind": "Pod",
       "apiVersion": "v1beta3",
       "metadata": {
         "name": "hello-pod",
         "labels": {
           "name": "hello-openshift"
         }
       },
       "spec": {
         "containers": [
           {
             "name": "hello-openshift",
             "image": "openshift/hello-openshift",
             "ports": [
               {
                 "containerPort": 8080,
                 "protocol": "TCP"
               }
             ],
             "resources": {},
             "terminationMessagePath": "/dev/termination-log",
             "imagePullPolicy": "IfNotPresent",
             "capabilities": {},
             "securityContext": {
               "capabilities": {},
               "privileged": false
             }
           }
         ],
         "restartPolicy": "Always",
         "dnsPolicy": "ClusterFirst"
       }
     }
 
 service.json
 
     {
       "kind": "Service",
       "apiVersion": "v1beta3",
       "metadata": {
         "name": "hello-openshift"
       },
       "spec": {
         "ports": [
           {
             "protocol": "TCP",
             "port": 27017,
             "targetPort": 0,
             "nodePort": 0
           }
         ],
         "selector": {
           "name": "hello-openshift"
         },
         "portalIP": "",
         "type": "ClusterIP",
         "sessionAffinity": "None"
       }
     }
 
 route.json
 
     {
       "kind": "Route",
       "apiVersion": "v1beta3",
       "metadata": {
         "name": "hello-route"
       },
       "spec": {
         "host": "hello-openshift.v3.rhcloud.com",
         "to": {
           "kind": "Service",
           "name": "hello-openshift"
         }
       }
     }
 
 ## Securing Your Routes
 
 Creating a secure route to your pods can be accomplished by specifying the TLS Termination of the route and, optionally,
providing certificates to use.  As of this writing, OpenShift beta1 TLS termination relies on SNI for serving custom certificates.
 In the future, the ability to create custom frontends within the router will allow all traffic to serve custom certificates.
 
TLS Termination falls into the following configuration buckets:
 
 #### Edge Termination
 Edge termination means that TLS termination occurs prior to traffic reaching the destination.  TLS certificates are served
 by the frontend of the router.
 
 Edge termination is configured by setting `TLS.Termination` to `edge` on your `route` and by specifying the `CertificateFile`
 and `KeyFile` (at a minimum).  You may also specify your `CACertificateFile` to complete the entire certificate chain.
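
As a sketch, extending the `route.json` example from this document, an edge-terminated route could look like the
following.  The lowercase field names (`certificateFile`, and so on) are an assumed serialization of the fields named
above and the paths are illustrative; check the API reference for your version before relying on them.

    {
      "kind": "Route",
      "apiVersion": "v1beta3",
      "metadata": {
        "name": "hello-secure-route"
      },
      "spec": {
        "host": "hello-openshift.v3.rhcloud.com",
        "to": {
          "kind": "Service",
          "name": "hello-openshift"
        },
        "tls": {
          "termination": "edge",
          "certificateFile": "/path/to/example.crt",
          "keyFile": "/path/to/example.key",
          "caCertificateFile": "/path/to/ca.crt"
        }
      }
    }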
 
 #### Passthrough Termination
 Passthrough termination is a mechanism to send encrypted traffic straight to the destination without the router providing
 TLS termination.
 
 Passthrough termination is configured by setting `TLS.Termination` to `passthrough` on your `route`.  No other information is required.
 The destination (such as an Nginx, Apache, or another HAProxy instance) will be responsible for serving certificates for
 the traffic.
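
Under the same serialization caveat as the edge example above, the TLS stanza of a passthrough route reduces to the
termination type alone:

    "tls": {
      "termination": "passthrough"
    }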
 
 #### Re-encryption Termination
 Re-encryption is a special case of edge termination where the traffic is first decrypted with certificate A and then
 re-encrypted with certificate B when sending the traffic to the destination.
 
 Re-encryption termination is configured by setting `TLS.Termination` to `reencrypt` and providing the `CertificateFile`,
 `KeyFile`, the `CACertificateFile`, and a `DestinationCACertificateFile`.  The edge certificates remain the same as in the edge
 termination use case.  The `DestinationCACertificateFile` is used in order to validate the secure connection from the
 router to the destination.
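
Again as a sketch under the same serialization caveat, the TLS stanza for a re-encrypting route carries all four
fields (paths illustrative):

    "tls": {
      "termination": "reencrypt",
      "certificateFile": "/path/to/example.crt",
      "keyFile": "/path/to/example.key",
      "caCertificateFile": "/path/to/ca.crt",
      "destinationCACertificateFile": "/path/to/destination-ca.crt"
    }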
 
 ### Special Notes About Secure Routes
At this point, password protected key files are not supported.  HAProxy prompts you for a password when starting up and
does not have a way to automate this process.  We will need a follow up for `KeyPassPhrase`.  To remove a passphrase from
a key file you may run `openssl rsa -in passwordProtectedKey.key -out new.key`.
 
 ## Running HA Routers
 
 Highly available router setups can be accomplished by running multiple instances of the router pod and fronting them with
 a balancing tier.  This could be something as simple as DNS round robin or as complex as multiple load balancing layers.
 
 ### DNS Round Robin
 
As a simple example, you may create a zone file for a DNS server like [BIND](http://www.isc.org/downloads/bind/) that maps
multiple A records for a single domain name.  When clients do a lookup they will be given one of the many records, in turn,
as a round robin scheme.  The files below illustrate an example of using wildcard DNS with multiple A records to achieve
the desired round robin.  The wildcard could be further distributed into shards with `*.<shard>`.  Finally, a test using
`dig` (available in the `bind-utils` package) is shown from the vagrant environment with multiple answers for the
same lookup.  Running multiple pings shows the resolution alternating between IP addresses.
 
 #### named.conf - add a new zone that points to your file
     zone "v3.rhcloud.com" IN {
             type master;
             file "v3.rhcloud.com.zone";
     };
 
 
 #### v3.rhcloud.com.zone - contains the round robin mappings for the DNS lookup
     $ORIGIN v3.rhcloud.com.
 
     @       IN      SOA     . v3.rhcloud.com. (
                          2009092001         ; Serial
                              604800         ; Refresh
                               86400         ; Retry
                             1206900         ; Expire
                                 300 )       ; Negative Cache TTL
             IN      NS      ns1.v3.rhcloud.com.
     ns1     IN      A       127.0.0.1
     *       IN      A       10.245.2.2
             IN      A       10.245.2.3
 
 
 #### Testing the entry
 
 
     [vagrant@openshift-master ~]$ dig hello-openshift.shard1.v3.rhcloud.com
 
     ; <<>> DiG 9.9.4-P2-RedHat-9.9.4-16.P2.fc20 <<>> hello-openshift.shard1.v3.rhcloud.com
     ;; global options: +cmd
     ;; Got answer:
     ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 36389
     ;; flags: qr aa rd; QUERY: 1, ANSWER: 2, AUTHORITY: 1, ADDITIONAL: 2
     ;; WARNING: recursion requested but not available
 
     ;; OPT PSEUDOSECTION:
     ; EDNS: version: 0, flags:; udp: 4096
     ;; QUESTION SECTION:
     ;hello-openshift.shard1.v3.rhcloud.com. IN A
 
     ;; ANSWER SECTION:
     hello-openshift.shard1.v3.rhcloud.com. 300 IN A	10.245.2.2
     hello-openshift.shard1.v3.rhcloud.com. 300 IN A	10.245.2.3
 
     ;; AUTHORITY SECTION:
     v3.rhcloud.com.		300	IN	NS	ns1.v3.rhcloud.com.
 
     ;; ADDITIONAL SECTION:
     ns1.v3.rhcloud.com.	300	IN	A	127.0.0.1
 
     ;; Query time: 5 msec
     ;; SERVER: 10.245.2.3#53(10.245.2.3)
     ;; WHEN: Wed Nov 19 19:01:32 UTC 2014
     ;; MSG SIZE  rcvd: 132
 
     [vagrant@openshift-master ~]$ ping hello-openshift.shard1.v3.rhcloud.com
     PING hello-openshift.shard1.v3.rhcloud.com (10.245.2.3) 56(84) bytes of data.
     ...
     ^C
     --- hello-openshift.shard1.v3.rhcloud.com ping statistics ---
     2 packets transmitted, 2 received, 0% packet loss, time 1000ms
     rtt min/avg/max/mdev = 0.272/0.573/0.874/0.301 ms
     [vagrant@openshift-master ~]$ ping hello-openshift.shard1.v3.rhcloud.com
     ...
 
 
 
 ## Dev - Building the haproxy router image
 
When building the router you use the scripts in the `${OPENSHIFT ORIGIN PROJECT}/hack` directory.  This will build both
the base images and the router image.  When complete you should have an `openshift/origin-haproxy-router` container that shows
in `docker images` and is ready to use.
 
    $ hack/build-base-images.sh
    $ hack/build-images.sh
 
 ## Dev - router internals
 
The router is an [HAProxy](http://www.haproxy.org/) container run via a Go wrapper (`openshift-router.go`) that
provides a watch on `routes` and `endpoints`.  The watch funnels down to the configuration files for the [HAProxy](http://www.haproxy.org/)
plugin, which can be found in `plugins/router/haproxy/haproxy.go`.  The router is then issued a reload command.
 
When debugging the router it is sometimes useful to inspect these files.  To do this you must enter the namespaces of the
running container by getting its pid and using nsenter:
`nsenter -m -u -n -i -p -t $(docker inspect --format "{{.State.Pid }}" <container-id>)`
Listed below are the files used for configuration, followed by an example of inspecting them.
 
     ConfigTemplate         = "/var/lib/haproxy/conf/haproxy_template.conf"
     ConfigFile             = "/var/lib/haproxy/conf/haproxy.config"
     HostMapFile            = "/var/lib/haproxy/conf/os_http_be.map"
     EdgeHostMapFile        = "/var/lib/haproxy/conf/os_edge_http_be.map"
     SniPassThruHostMapFile = "/var/lib/haproxy/conf/os_sni_passthrough.map"
     ReencryptHostMapFile   = "/var/lib/haproxy/conf/os_reencrypt.map"
     TcpHostMapFile         = "/var/lib/haproxy/conf/os_tcp_be.map"
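
Once inside the container's namespaces, you can inspect these files directly, for example:

    $ cat /var/lib/haproxy/conf/haproxy.config
    $ cat /var/lib/haproxy/conf/os_http_be.map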