
Update the devstack LBaaS guide for Octavia

The existing devstack guide for load balancing is out of date.
This patch updates the guide to reflect the current way to install
devstack with the Octavia plugin(s).

Change-Id: Id48b70b50e44ec7b965d969b2d93f77543d7364c

Michael Johnson authored on 2019/01/25 03:49:35
@@ -1,39 +1,54 @@
-Configure Load-Balancer Version 2
-=================================
+Devstack with Octavia Load Balancing
+====================================
 
-Starting in the OpenStack Liberty release, the
-`neutron LBaaS v2 API <https://developer.openstack.org/api-ref/network/v2/index.html>`_
-is now stable while the LBaaS v1 API has been deprecated.  The LBaaS v2 reference
-driver is based on Octavia.
+Starting with the OpenStack Pike release, Octavia is now a standalone service
+providing load balancing services for OpenStack.
 
+This guide will show you how to create a devstack with `Octavia API`_ enabled.
+
+.. _Octavia API: https://developer.openstack.org/api-ref/load-balancer/v2/index.html
 
 Phase 1: Create DevStack + 2 nova instances
 --------------------------------------------
 
 First, set up a vm of your choice with at least 8 GB RAM and 16 GB disk space,
-make sure it is updated. Install git and any other developer tools you find useful.
+make sure it is updated. Install git and any other developer tools you find
+useful.
 
 Install devstack
 
 ::
 
     git clone https://git.openstack.org/openstack-dev/devstack
-    cd devstack
+    cd devstack/tools
+    sudo ./create-stack-user.sh
+    cd ../..
+    sudo mv devstack /opt/stack
+    sudo chown -R stack.stack /opt/stack/devstack
 
+This will clone the current devstack code locally, then set up the "stack"
+account that devstack services will run under. Finally, it will move devstack
+into its default location in /opt/stack/devstack.
 
-Edit your ``local.conf`` to look like
+Edit your ``/opt/stack/devstack/local.conf`` to look like
 
 ::
 
     [[local|localrc]]
-    # Load the external LBaaS plugin.
-    enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas
     enable_plugin octavia https://git.openstack.org/openstack/octavia
+    # If you are enabling horizon, include the octavia dashboard
+    # enable_plugin octavia-dashboard https://git.openstack.org/openstack/octavia-dashboard.git
+    # If you are enabling barbican for TLS offload in Octavia, include it here.
+    # enable_plugin barbican https://github.com/openstack/barbican.git
+
+    # If you have python3 available:
+    # USE_PYTHON3=True
 
     # ===== BEGIN localrc =====
     DATABASE_PASSWORD=password
     ADMIN_PASSWORD=password
     SERVICE_PASSWORD=password
+    SERVICE_TOKEN=password
     RABBIT_PASSWORD=password
     # Enable Logging
     LOGFILE=$DEST/logs/stack.sh.log
@@ -41,27 +56,30 @@ Edit your ``local.conf`` to look like
     LOG_COLOR=True
     # Pre-requisite
     ENABLED_SERVICES=rabbit,mysql,key
-    # Horizon
-    ENABLED_SERVICES+=,horizon
+    # Horizon - enable for the OpenStack web GUI
+    # ENABLED_SERVICES+=,horizon
     # Nova
-    ENABLED_SERVICES+=,n-api,n-cpu,n-cond,n-sch
+    ENABLED_SERVICES+=,n-api,n-crt,n-obj,n-cpu,n-cond,n-sch,n-api-meta,n-sproxy
+    ENABLED_SERVICES+=,placement-api,placement-client
     # Glance
     ENABLED_SERVICES+=,g-api,g-reg
     # Neutron
-    ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta
-    # Enable LBaaS v2
-    ENABLED_SERVICES+=,q-lbaasv2
+    ENABLED_SERVICES+=,q-svc,q-agt,q-dhcp,q-l3,q-meta,neutron
     ENABLED_SERVICES+=,octavia,o-cw,o-hk,o-hm,o-api
     # Cinder
     ENABLED_SERVICES+=,c-api,c-vol,c-sch
     # Tempest
     ENABLED_SERVICES+=,tempest
+    # Barbican - Optionally used for TLS offload in Octavia
+    # ENABLED_SERVICES+=,barbican
     # ===== END localrc =====
 
 Run stack.sh and do some sanity checks
 
 ::
 
+    sudo su - stack
+    cd /opt/stack/devstack
     ./stack.sh
     . ./openrc
 
@@ -72,38 +90,59 @@ Create two nova instances that we can use as test http servers:
 ::
 
     #create nova instances on private network
-    nova boot --image $(nova image-list | awk '/ cirros-.*-x86_64-uec / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node1
-    nova boot --image $(nova image-list | awk '/ cirros-.*-x86_64-uec / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node2
-    nova list # should show the nova instances just created
+    openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node1
+    openstack server create --image $(openstack image list | awk '/ cirros-.*-x86_64-.* / {print $2}') --flavor 1 --nic net-id=$(openstack network list | awk '/ private / {print $2}') node2
+    openstack server list # should show the nova instances just created
 
     #add secgroup rules to allow ssh etc..
     openstack security group rule create default --protocol icmp
     openstack security group rule create default --protocol tcp --dst-port 22:22
     openstack security group rule create default --protocol tcp --dst-port 80:80
 
-Set up a simple web server on each of these instances. ssh into each instance (username 'cirros', password 'cubswin:)') and run
+Set up a simple web server on each of these instances. ssh into each instance (username 'cirros', password 'cubswin:)' or 'gocubsgo') and run
 
 ::
 
     MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')
     while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done&
 
-Phase 2: Create your load balancers
+Phase 2: Create your load balancer
+----------------------------------
+
+Make sure you have the 'openstack loadbalancer' commands:
 
 ::
 
-    neutron lbaas-loadbalancer-create --name lb1 private-subnet
-    neutron lbaas-loadbalancer-show lb1  # Wait for the provisioning_status to be ACTIVE.
-    neutron lbaas-listener-create --loadbalancer lb1 --protocol HTTP --protocol-port 80 --name listener1
-    sleep 10  # Sleep since LBaaS actions can take a few seconds depending on the environment.
-    neutron lbaas-pool-create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
-    sleep 10
-    neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.3 --protocol-port 80 pool1
-    sleep 10
-    neutron lbaas-member-create  --subnet private-subnet --address 10.0.0.5 --protocol-port 80 pool1
-
-Please note here that the "10.0.0.3" and "10.0.0.5" in the above commands are the IPs of the nodes
-(in my test run-thru, they were actually 10.2 and 10.4), and the address of the created LB will be
-reported as "vip_address" from the lbaas-loadbalancer-create, and a quick test of that LB is
-"curl that-lb-ip", which should alternate between showing the IPs of the two nodes.
+    pip install python-octaviaclient
+
+Create your load balancer:
+
+::
+
+    openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
+    openstack loadbalancer show lb1  # Wait for the provisioning_status to be ACTIVE.
+    openstack loadbalancer listener create --protocol HTTP --protocol-port 80 --name listener1 lb1
+    openstack loadbalancer show lb1  # Wait for the provisioning_status to be ACTIVE.
+    openstack loadbalancer pool create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1
+    openstack loadbalancer show lb1  # Wait for the provisioning_status to be ACTIVE.
+    openstack loadbalancer healthmonitor create --delay 5 --timeout 2 --max-retries 1 --type HTTP pool1
+    openstack loadbalancer show lb1  # Wait for the provisioning_status to be ACTIVE.
+    openstack loadbalancer member create --subnet-id private-subnet --address <web server 1 address> --protocol-port 80 pool1
+    openstack loadbalancer show lb1  # Wait for the provisioning_status to be ACTIVE.
+    openstack loadbalancer member create --subnet-id private-subnet --address <web server 2 address> --protocol-port 80 pool1
+
+Please note: The <web server # address> fields are the IP addresses of the nova
+servers created in Phase 1.
+Also note: using the API directly, you can do all of the above commands in one
+API call.
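+
+For illustration, such a single-call ("fully populated") create request to the
+Octavia v2 load balancer API might look like the following sketch. Treat the
+values as examples; substitute your real subnet ID and member addresses:
+
+::
+
+    POST /v2/lbaas/loadbalancers
+    {
+        "loadbalancer": {
+            "name": "lb1",
+            "vip_subnet_id": "<private-subnet id>",
+            "listeners": [{
+                "name": "listener1",
+                "protocol": "HTTP",
+                "protocol_port": 80,
+                "default_pool": {
+                    "name": "pool1",
+                    "lb_algorithm": "ROUND_ROBIN",
+                    "protocol": "HTTP",
+                    "healthmonitor": {
+                        "delay": 5, "timeout": 2,
+                        "max_retries": 1, "type": "HTTP"
+                    },
+                    "members": [
+                        {"address": "<web server 1 address>", "protocol_port": 80},
+                        {"address": "<web server 2 address>", "protocol_port": 80}
+                    ]
+                }
+            }]
+        }
+    }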
+
+Phase 3: Test your load balancer
+--------------------------------
+
+::
+
+    openstack loadbalancer show lb1  # Note the vip_address
+    curl http://<vip_address>
+    curl http://<vip_address>
+
+This should show the "Welcome to <IP>" message from each member server.