Now that you have a (hopefully) functional OpenStack Cloud, you need to do a couple of things to get your cloud fully operational and usable for applications like Atmosphere.
In the event that all of your Galera containers are shut down at the same time, the Galera cluster will break and you may need to restore from a backup. Take an initial dump of your database in case this happens. From inside a Galera container:

```
mysqldump --opt --events --all-databases > openstack-YYYYMMDD.sql
```

Store the resulting file in a safe place.
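The dump above can be scripted with a timestamped filename so it is cron-friendly. This is a minimal sketch; the `mysqldump` line is commented out here because it must be run inside a Galera container:

```shell
# Minimal backup sketch, assuming it runs inside a Galera container.
STAMP=$(date +%Y%m%d)
DUMPFILE="openstack-${STAMP}.sql"
# mysqldump --opt --events --all-databases > "$DUMPFILE"
# gzip "$DUMPFILE"   # optional: compress before copying off-host
echo "would write $DUMPFILE"
```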
If using Cisco gear with this deployment setup, you must configure an IGMP snooping querier on the same subnet as the tunneling subnet and VLAN.
For example, if using the default range of IP addresses defined in the OpenStack-Ansible deployment (i.e. tunnel: 172.29.240.0/22), configure an IGMP snooping querier within that range on the VLAN used for the tunnel network; otherwise, multicast traffic for HA L3 Neutron agents will not work. A suggested IGMP snooping querier IP is 172.29.243.254 (if using the above tunnel block of IP addresses).
```
interface Vlan 102
 name tunnel
 ip address 172.29.243.254/22
 ip igmp snooping querier
 no shutdown
!
ip igmp snooping enable
```
This snippet does not cover adding tagged/untagged trunk ports to the VLAN interface, which you must do specific to your deployment.
SSH to any of the infrastructure hosts. Then, attach to one of the utility containers and source the openrc file:

```
lxc-attach -n name-of-your-utility-container
source /root/openrc
```
Then, you can use the OpenStack command-line interface -- here's a cheat sheet.
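As a quick sanity check that the openrc file was sourced correctly, the client expects the `OS_*` environment variables to be set. A minimal sketch, where the exported values are placeholders for illustration rather than real credentials:

```shell
# Placeholder values; in practice `source /root/openrc` sets these.
export OS_USERNAME=admin
export OS_AUTH_URL=http://172.29.236.10:5000/v3
export OS_PROJECT_NAME=admin

missing=0
for v in OS_USERNAME OS_AUTH_URL OS_PROJECT_NAME; do
  if [ -z "$(printenv "$v")" ]; then
    echo "missing: $v"
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "openrc OK"
```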
See Verify operation of Glance.
For the lazy, here are commands to set up Ubuntu 16.04, Ubuntu 14.04, CentOS 7, and CentOS 6 images:

```
wget https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
openstack image create "ubuntu-16.04" --file xenial-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --public

wget http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
openstack image create "ubuntu-14.04" --file trusty-server-cloudimg-amd64-disk1.img --disk-format qcow2 --container-format bare --public

wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2
openstack image create "centos-7" --file CentOS-7-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --public

wget http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2
openstack image create "centos-6" --file CentOS-6-x86_64-GenericCloud.qcow2 --disk-format qcow2 --container-format bare --public
```
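The same downloads and uploads can be driven from a loop. A hedged sketch, with the `wget` and `openstack` calls commented out so the loop only prints what it would do:

```shell
# Image names and source URLs taken from the commands above.
NAMES="ubuntu-16.04 ubuntu-14.04 centos-7 centos-6"
url_for() {
  case "$1" in
    ubuntu-16.04) echo "https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img" ;;
    ubuntu-14.04) echo "http://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img" ;;
    centos-7)     echo "http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2" ;;
    centos-6)     echo "http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2" ;;
  esac
}
count=0
for name in $NAMES; do
  echo "would fetch $(url_for "$name") and register as $name"
  # wget "$(url_for "$name")" -O "/tmp/$name.img"
  # openstack image create "$name" --file "/tmp/$name.img" --disk-format qcow2 --container-format bare --public
  count=$((count + 1))
done
```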
OpenStack should now report all of the images you just created:
```
root@infra1_utility_container-<ID>:~# openstack image list
+--------------------------------------+--------------+
| ID                                   | Name         |
+--------------------------------------+--------------+
| random-id-1-ahgh8caetha9sahc9bu5OJ6g | centos-7     |
| random-id-2-ahgh8caetha9sahc9bu5OJ6g | cirros       |
| random-id-3-ahgh8caetha9sahc9bu5OJ6g | ubuntu-14.04 |
+--------------------------------------+--------------+
```
Neutron is configured in OSA to use an HA VRRP L3 agent implemented with Linux bridges. To get Neutron working for instance launches, so that one can log in via a public IP address, some networking setup is required.
- First, one must create a flat network, which allows OpenStack Neutron to use actual network address space on a public or private network in a datacenter or test environment.
- Once a flat network is created, one must assign a list of non-DHCP-assigned addresses that can be used for floating IP addresses.
- One must then create a router to route public/private datacenter traffic to the internet, as well as OpenStack non-routable private IP address space for OpenStack tenant networks.
- Lastly, be sure to configure the OpenStack security groups to allow access to instances from the assigned floating IP address.
Below are the steps to create the networks described above:

```
neutron net-create --provider:physical_network=flat --provider:network_type=flat --router:external=true --shared ext-net

# Fill in these with your external (public) IP space
LOW="3";HIGH="254";ROUTER="1";NETWORK="192.168.1";CIDR=".0/24";DNS="8.8.8.8"
neutron subnet-create --name ext-net --allocation-pool start=${NETWORK}.${LOW},end=${NETWORK}.${HIGH} --dns-nameserver ${DNS} --gateway ${NETWORK}.${ROUTER} ext-net ${NETWORK}${CIDR} --enable_dhcp=False

neutron router-create public_router
neutron router-gateway-set public_router ext-net

neutron net-create selfservice
neutron subnet-create --name selfservice --dns-nameserver ${DNS} --gateway 172.16.1.1 selfservice 172.16.1.0/24
neutron router-interface-add public_router selfservice

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
```
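To sanity-check the variable expansion in the `neutron subnet-create` line before running it, one can echo the derived values. With the example values above:

```shell
# Example values from the subnet-create step above.
LOW="3"; HIGH="254"; ROUTER="1"; NETWORK="192.168.1"; CIDR=".0/24"; DNS="8.8.8.8"
POOL="start=${NETWORK}.${LOW},end=${NETWORK}.${HIGH}"
SUBNET="${NETWORK}${CIDR}"
GATEWAY="${NETWORK}.${ROUTER}"
echo "allocation pool: $POOL"
echo "gateway: $GATEWAY  subnet: $SUBNET  dns: $DNS"
# → allocation pool: start=192.168.1.3,end=192.168.1.254
# → gateway: 192.168.1.1  subnet: 192.168.1.0/24  dns: 8.8.8.8
```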
For more information on how VRRP works, see these documents:
- http://docs.openstack.org/newton/networking-guide/scenario-l3ha-lb.html
- https://wiki.openstack.org/wiki/Neutron/L3_High_Availability_VRRP
For steps on how to troubleshoot OpenStack Neutron networking, see the OpenStack Neutron troubleshooting documentation.
Create some flavors for launching instances:

```
openstack flavor create --ram 2048 --disk 8 --vcpus 1 --public tiny
openstack flavor create --ram 4096 --disk 8 --vcpus 2 --public small
openstack flavor create --ram 16384 --disk 8 --vcpus 6 --public medium
openstack flavor create --ram 30720 --disk 8 --vcpus 10 --public large
openstack flavor create --ram 61440 --disk 8 --vcpus 24 --public xlarge
openstack flavor create --ram 122880 --disk 8 --vcpus 44 --public xxlarge
```
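The flavor commands above can equivalently be driven from a small table. A sketch, with the real `openstack` call commented out so the loop only prints what it would create:

```shell
# Flavor table: name, RAM (MB), vCPUs; all flavors use an 8 GB disk.
count=0
while read -r name ram vcpus; do
  echo "flavor $name: ${ram} MB RAM, ${vcpus} vCPUs, 8 GB disk"
  # openstack flavor create --ram "$ram" --disk 8 --vcpus "$vcpus" --public "$name"
  count=$((count + 1))
done <<'EOF'
tiny 2048 1
small 4096 2
medium 16384 6
large 30720 10
xlarge 61440 24
xxlarge 122880 44
EOF
```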
- Log in to the OpenStack `Horizon` web interface, defined by the variable `external_lb_vip_address` in the `openstack_user_config.yml` file under the section called `global_overrides`.
- Grab the login password from the `user_secrets.yml` file, listed under `keystone_auth_admin_password`. Log in using `admin` and the `keystone_auth_admin_password`.
- Select `Project`, then `Compute` / `Instances`.
- Select `Launch Instance`.
- Fill out the following settings:

```
# Details
Availability Zone: nova
Instance Name: <your-instance-name-here>
Flavor: <use the flavor you created, m1.medium for non-cirros images, m1.small for cirros>
Instance Count: 1
Instance Boot Source: Boot from image
Image Name: <glance image to boot>

# Access & Security
Click + to add your id_rsa.pub key
Security Groups: Select default

# Networking
Drag "selfservice" network into "Selected networks"
Launch

# Add Floating IP address
Click down-arrow under "Actions" for that launched instance, and select "Associate Floating IP"
Click "+" to create a Floating IP allocation
Pool: ext-net
Allocate IP
Associate
```

- Verify that the instance shows an "Active" status. If not, check all OpenStack logs by logging into the `Infrastructure Logging Host` and checking for errors, like so:

```
cd /openstack/log1_rsyslog_container-<container-id>/log-storage
tail -n 1000 -f */*.log | grep ERROR
```
Once one has verified that OpenStack is working well with one compute node, one might need to add additional compute nodes to handle the load from a large number of users.

To do this, one will need to re-run all steps defined on the main README.md page, appending `--limit "<compute-bare-metal-2>,<compute-bare-metal-3>,<compute-bare-metal-4>"` at every step.

Once finished, update the `openstack_user_config.yml` to include the new `used_ips` for those hosts, and add them individually to `compute_hosts`.

Then run:

```
openstack-ansible setup-everything.yml --limit "<compute2>,<compute3>,<compute4>"
```

For more information, see: http://docs.openstack.org/developer/openstack-ansible/newton/developer-docs/ops-add-computehost.html
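One way to script the re-run, assuming the standard OSA playbook sequence (`setup-hosts.yml`, `setup-infrastructure.yml`, `setup-openstack.yml`); the host names are placeholders and the real `openstack-ansible` calls are commented out:

```shell
LIMIT="compute2,compute3,compute4"   # placeholder host names
count=0
for pb in setup-hosts.yml setup-infrastructure.yml setup-openstack.yml; do
  echo "openstack-ansible $pb --limit \"$LIMIT\""
  # openstack-ansible "$pb" --limit "$LIMIT"
  count=$((count + 1))
done
```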
If `Horizon` reports a failure on instance launch because it cannot select a hypervisor (or says there are none available), check the `Glance` logs for the following message: `Image <glance-image-id> could not be found.` This means that HAProxy sent a request to a `Glance` container which did not have the requested image.

Until something like glance-irods is installed and configured, one might want to restrict HAProxy for Glance to only the node that has all of the images.

To fix this, do the following: log in to all `Infrastructure Control Plane Hosts` at once using broadcast input with your favorite terminal, or `tmux` with `setw synchronize-panes on`, and modify the HAProxy configuration.
```
cd /etc/haproxy/conf.d/
lxc-attach -n infra<LITERAL-TAB>_glance_container-<LITERAL-TAB>
cd /var/lib/glance/images/
ls -la
# Identify the container that contains all the Glance images in it, and take note of this container
exit
vim glance_registry
# Modify the section labeled: "backend glance_registry-back" and comment out the two containers that DO NOT have the correct Glance images on them.
# E.g. Where Glance "infra3" contained all Glance images
backend glance_registry-back
mode http
balance leastconn
#server infra1_glance_container-<id> 172.29.239.<ip1>:9191 check port 9191 inter 12000 rise 3 fall 3
#server infra2_glance_container-<id> 172.29.238.<ip2>:9191 check port 9191 inter 12000 rise 3 fall 3
server infra3_glance_container-<id> 172.29.239.<ip3>:9191 check port 9191 inter 12000 rise 3 fall 3
```
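After editing, one can confirm that exactly one backend `server` line remains active before reloading HAProxy. A sketch against a sample config written to `/tmp` for illustration (container IDs and IPs are made up):

```shell
# Sample of an edited backend section, for illustration only.
cat > /tmp/glance_registry <<'EOF'
backend glance_registry-back
    mode http
    balance leastconn
    #server infra1_glance_container-aaa 172.29.239.11:9191 check port 9191
    #server infra2_glance_container-bbb 172.29.238.12:9191 check port 9191
    server infra3_glance_container-ccc 172.29.239.13:9191 check port 9191
EOF
# Count uncommented "server" lines; exactly 1 should remain.
ACTIVE=$(grep -c '^[[:space:]]*server ' /tmp/glance_registry)
echo "active backends: $ACTIVE"
# Then reload HAProxy on each host, e.g.: service haproxy reload
```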
If all of the Infrastructure Control Plane Hosts go down or reboot at once, this will have a disastrous effect on the Galera MySQL cluster, as it will likely not recover and may suffer split-brain issues. See Galera cluster recovery for more information.
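When recovering, the standard Galera approach is to bootstrap from the node with the highest `seqno` in `grastate.dat` (the real file lives at `/var/lib/mysql/grastate.dat` inside each Galera container). This sketch parses a sample file with made-up values:

```shell
# Sample grastate.dat, for illustration.
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    00000000-0000-0000-0000-000000000000
seqno:   1234
EOF
SEQNO=$(awk '$1 == "seqno:" { print $2 }' /tmp/grastate.dat)
echo "seqno: $SEQNO"
# Compare seqno across all three nodes, then start mysqld with
# --wsrep-new-cluster on the node with the highest value only.
```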
For the list of default service ports, see:
http://docs.openstack.org/liberty/config-reference/content/firewalls-default-ports.html

At this point, all firewalls will be down, so one will need to be sure to configure `ufw` or `iptables` on all of the hosts.
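A hedged sketch of `ufw` rules covering common API ports from the default-ports reference above; verify the list against your own deployment before applying, as the `ufw` calls are commented out here:

```shell
# Port table: port, service (common OpenStack defaults; adjust per deployment).
count=0
while read -r port svc; do
  echo "would allow ${port}/tcp  # $svc"
  # ufw allow "${port}/tcp"
  count=$((count + 1))
done <<'EOF'
22 SSH
80 Horizon-HTTP
443 Horizon-HTTPS
5000 Keystone-public
35357 Keystone-admin
9292 Glance-API
8774 Nova-API
9696 Neutron-API
EOF
```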
For general OpenStack-Ansible documentation, see:
http://docs.openstack.org/developer/openstack-ansible/