StackIQ has officially partnered with Red Hat to simplify the process of deploying Red Hat's OpenStack Platform (RHEL-OSP). StackIQ Cluster Manager is ideal for deploying the hardware infrastructure and application stacks of heterogeneous data center environments. With Red Hat's OpenStack offering, StackIQ Cluster Manager handles the automatic bare-metal installation of disparate hardware and the correct configuration of the multiple networks OpenStack requires. StackIQ Cluster Manager also automatically deploys Red Hat Foreman, which enables web-based configuration of OpenStack software on the deployed nodes. All OpenStack services (Nova, Neutron, Cinder, Swift, and the rest) are then available for management and deployment via the OpenStack Dashboard. StackIQ Cluster Manager enables the ongoing management, deployment, and integration of Red Hat OpenStack services on growing and multi-use clusters.
StackIQ takes a “software defined infrastructure” approach to provisioning and managing the cluster infrastructure that sits below applications like OpenStack and Hadoop. In this post, we’ll discuss how this is done, followed by a step-by-step guide to installing RHEL Foreman/OpenStack with StackIQ’s management system.
Step 1. Install StackIQ Cluster Manager
The StackIQ Cluster Manager node is installed from bare-metal (i.e., there is no pre-requisite software and no operating system previously installed) by burning the StackIQ Cluster Core Roll ISO to DVD and booting from it (the StackIQ Cluster Core Roll can be obtained from the “Rolls” section after registering at http://www.stackiq.com/download/). The Cluster Core Roll leads the user through a few simple forms (e.g., what is the IP address of StackIQ Cluster Manager, what is the gateway, DNS server, etc.) and then asks for a base OS DVD (for example, Red Hat Enterprise Linux 6.5; other Red Hat-like distributions such as CentOS are supported as well, but for RHEL OpenStack, only RHEL certified media are acceptable). The installer copies all the bits from both DVDs and automatically creates a new Red Hat distribution by blending the packages from both DVDs together.
The remainder of the StackIQ Cluster Manager installation requires no further manual steps; this entire step takes 30 to 40 minutes.
A detailed description of StackIQ Cluster Manager can be found in section 3 of the StackIQ Users Guide. It is strongly recommended that you familiarize yourself with at least this section before proceeding. (C’mon, really, it’s not that bad. The print is large and there are a bunch of pictures, shouldn’t take long.)
On the StackIQ Cluster Manager, download the RHEL-Updates-04112014 roll ISO from http://stackiq-release.s3.amazonaws.com/stack3/RHEL-Updates-04112014-0.x86_64.disk1.iso. (The Heartbleed vulnerability is fixed in updates from Red Hat contained in the RHEL-Updates roll.)
Step 2. Install the RHEL OpenStack Bridge and RHEL OpenStack RPMS Rolls
StackIQ has developed software that “bridges” our core infrastructure management solution to Red Hat’s OpenStack Platform; we’ve named it the RHEL OpenStack Bridge Roll. The RHEL OpenStack Bridge Roll spins up Foreman services by installing a Foreman appliance. This allows you to leverage Red Hat’s Foreman/OpenStack puppet integration to deploy a fully operational OpenStack cloud.
StackIQ Cluster Manager uses the concept of “rolls” to combine packages (RPMs) and configuration (XML files which are used to build custom kickstart files) to dynamically add and automatically configure software services and applications.
The first step is to install a StackIQ Cluster Manager as a deployment machine. This requires, at a minimum, the cluster-core and RHEL 6.5 ISOs. It’s not possible to add StackIQ Cluster Manager to an already existing RHEL 6.5 machine; you must start with the installation of StackIQ Cluster Manager. The rhel-openstack-bridge, rhel-6-server-openstack-4.0-rpms, and RHEL-Updates rolls are not necessary at installation time; they can be added to StackIQ Cluster Manager after the fact. This saves on CD/DVD burning and time when adding multiple rolls during StackIQ Cluster Manager installation.
It is highly recommended that you verify the MD5 checksums of the downloaded media.
You must burn the cluster-core roll and RHEL Server 6.5 ISOs to disk, or, if installing via virtual CD/DVD, simply mount the ISOs as virtual media via the BMC.
Then follow the instructions in section 3 of https://s3.amazonaws.com/stackiq-release/stack3/roll-cluster-core-usersguide.pdf to install StackIQ Cluster Manager. (Yes! I mentioned it again.)
Additionally, the following video takes you through the basic process of installing StackIQ Cluster Manager and backend nodes. Specific instructions for Foreman/OpenStack follow.
Verify the MD5 checksums:
# md5sum rhel-openstack-bridge-1.0-0.x86_64.disk1.iso
should return f7a2e2cef16d63021e5d2b7bc2b16189
# md5sum rhel-6-server-openstack-4.0-rpms-6.5-0.x86_64.disk1.iso
should return 19c05af49e53f90a2cc9bcd4dddb353f
# md5sum RHEL-Updates-04112014-0.x86_64.disk1.iso
should return 44b2aeb7ec26c9f1f15e615604101304
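The three checks above can be wrapped in a small helper. This is a sketch; the stand-in file below exists only so the function can be demonstrated without the real ISOs:

```shell
# verify_md5 EXPECTED FILE -> prints "FILE: OK" and returns 0 on a
# match; prints a mismatch message and returns 1 otherwise.
verify_md5() {
    actual=$(md5sum "$2" | awk '{print $1}')
    if [ "$actual" = "$1" ]; then
        echo "$2: OK"
    else
        echo "$2: MISMATCH (got $actual)" >&2
        return 1
    fi
}

# With the real ISOs you would run, e.g.:
#   verify_md5 f7a2e2cef16d63021e5d2b7bc2b16189 rhel-openstack-bridge-1.0-0.x86_64.disk1.iso
# Demonstration against a stand-in file:
printf 'demo\n' > /tmp/demo.iso
verify_md5 "$(md5sum /tmp/demo.iso | awk '{print $1}')" /tmp/demo.iso
```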
Then execute the following commands on the frontend (all three rolls must be added before they can be enabled):
# rocks add roll rhel-openstack-bridge-1.0-0.x86_64.disk1.iso
# rocks add roll rhel-6-server-openstack-4.0-rpms-6.5-0.x86_64.disk1.iso
# rocks add roll RHEL-Updates-04112014-0.x86_64.disk1.iso
# rocks enable roll rhel-openstack-bridge rhel-6-server-openstack-4.0-rpms RHEL-Updates-6.5
# rocks create distro
# rocks run roll rhel-openstack-bridge | sh
(The rhel-6-server-openstack-4.0-rpms and RHEL-Updates rolls do not contain any configuration scripts, so they do not need to be configured with a “rocks run roll” command.)
StackIQ Cluster Manager is now configured and ready to install a Foreman appliance and the OpenStack backend nodes.
What You’ll Need:
- Cluster-core roll ISO: http://stackiq-release.s3.amazonaws.com/stack3/cluster-core-6.5-stack3.x86_64.disk1.iso
- RHEL Server 6.5 ISO (This is something you supply via Red Hat download from your Red Hat subscription.)
- rhel-openstack-bridge roll ISO: http://stackiq-release.s3.amazonaws.com/stack3/rhel-openstack-bridge-1.0-0.x86_64.disk1.iso. This will be added after StackIQ Cluster Manager is installed.
- rhel-6-server-openstack-4.0-rpms roll ISO: http://stackiq-release.s3.amazonaws.com/stack3/rhel-6-server-openstack-4.0-rpms-6.5-0.x86_64.disk1.iso. This will be added after StackIQ Cluster Manager is installed.
- RHEL-Updates roll ISO: http://stackiq-release.s3.amazonaws.com/stack3/RHEL-Updates-04112014-0.x86_64.disk1.iso. This will be added after StackIQ Cluster Manager is installed.
After StackIQ Cluster Manager is installed and booted, download the rhel-openstack-bridge, rhel-6-server-openstack-4.0-rpms, and RHEL-Updates roll ISOs onto it from the URLs above.
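The roll ISOs called out above can be fetched in one loop; the URLs are the ones listed in this post. In this sketch the wget command is echoed rather than executed, since the links may move or require registration:

```shell
# Fetch the three roll ISOs onto the StackIQ Cluster Manager.
base=http://stackiq-release.s3.amazonaws.com/stack3
for iso in rhel-openstack-bridge-1.0-0.x86_64.disk1.iso \
           rhel-6-server-openstack-4.0-rpms-6.5-0.x86_64.disk1.iso \
           RHEL-Updates-04112014-0.x86_64.disk1.iso; do
    echo wget "$base/$iso"   # drop the echo to actually download
done
```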
StackIQ Cluster Manager contains the notion of an “appliance.” An appliance has a kickstart structure that installs a preconfigured set of RPMS and services that allow for concentrated installation of a particular application. The bridge roll provides a “Foreman” appliance that sets up the automatic installation of the RHEL-OSP Foreman server with the required OpenStack infrastructure. It’s the fastest way to get a Foreman server up and running.
Step 3. Install the Foreman Appliance Using Discovery Mode in the StackIQ Cluster Manager GUI
“Discovery” mode allows the automatic installation of backend nodes without pre-populating the StackIQ Cluster Manager database with node names, IP addresses, MAC addresses, etc. The StackIQ Cluster Manager runs DHCP to answer and install any node making a PXE request on the subnet. This is ideal when you (a) have full control of the network and its policies and (b) don’t care about the naming convention of your nodes. If either is not true, please follow the instructions for populating the database in “Install Your Compute Nodes Using CSV Files” in the cluster-core roll documentation referenced above (Section 3.4.2).
“Discovery” mode is no longer turned on by default, as it may conflict with a company’s networking policy. To turn on Discovery mode, in a terminal or ssh session on StackIQ Cluster Manager do the following:
# rocks set attr discover_start true
To turn it off after installation if you wish:
# rocks set attr discover_start false
DHCP is always running but with “discover_start” set to “false,” it will not promiscuously answer PXE requests.
With Discovery turned on, you can install backend nodes via the GUI or via the command line. To install via the GUI, go to the StackIQ Cluster Manager GUI at http://<StackIQ Cluster Manager hostname or IP>.
Click on Appliance, and choose “Foreman” and click “Start.”
Boot the server you are using as the Foreman server. All backend nodes should be set to PXE first on the network interface attached to the private network; this is a hard requirement. In the GUI, you should see a server called “foreman-0-0” appear in the dialog, and after a short time the Visualization area in the “Discover” tab will indicate the network traffic being used during installation.
The Foreman server appliance installation is somewhat chatty. You’ll receive status updates in the Messages box at the bottom of the page for what is happening on the node. The bare-metal installation of the Foreman server is relatively short, about 20 minutes depending on the size of the disks being formatted. The installation of the Foreman application takes longer and happens after the initial boot due to RPM packaging constraints of the Foreman installer. It should be done, beginning to end, in about an hour. When the machine is up, the indicator next to its name will be green and there will be a message in the alerts box indicating the machine has installed Foreman.
Using the command line:
If for some reason you do not have access to the front-end web GUI or access is extremely slow, or if you just happen to be a command line person, there is a command to do discovery of backend resources.
To install a Foreman appliance:
# insert-ethers
Choose “Foreman” and choose “OK”
Boot the machine and it should be discovered, assuming PXE first.
Once the Foreman server is installed, you can access its web interface by running Firefox on the StackIQ Cluster Manager. It should be available at the IP address shown by:
# rocks list host interface foreman-0-0
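If you want just the address (to paste into a browser or a script), you can pull it out of the listing with awk. This is a sketch against sample output; the exact columns can vary between releases, so check your own output with the command above first:

```shell
# Imitation of "rocks list host interface foreman-0-0" output
# (columns roughly: HOST SUBNET IFACE MAC IP ...).
sample_output='HOST         SUBNET  IFACE MAC               IP
foreman-0-0: private eth0  aa:bb:cc:dd:ee:ff 10.1.255.253'

# Grab the IP on the private subnet (field 5 in this layout).
foreman_ip=$(printf '%s\n' "$sample_output" | awk '$2 == "private" {print $5}')
echo "$foreman_ip"
```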
Adding an additional interface
If you want it accessible on the public or corporate network and not just on the private network, it will be necessary to add another network interface attached to the public network.
If the interface was detected during install (the interface name and IP below are placeholders for your values):
# rocks set host interface ip foreman-0-0 <iface> <public IP>
# rocks set host interface subnet foreman-0-0 <iface> public
If you add the interface after the fact:
# rocks add host interface help
And fill in the appropriate fields.
In either event, to make the network change live, sync the network:
# rocks sync host network foreman-0-0 restart=yes
This procedure is more clearly delineated in section 4.3 of the cluster-core roll documentation, referenced (twice!) above.
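Putting the pieces together, adding a public interface after the fact and making it live might look like the following. This is a sketch in which rocks is stubbed with echo so the sequence can be shown end-to-end; the device name and IP are placeholders for your site's values:

```shell
rocks() { echo "rocks $*"; }   # stub for illustration only; remove on a real frontend

# Add a public-subnet interface to foreman-0-0, then push the change live.
rocks add host interface foreman-0-0 eth1 ip=1.2.3.4 subnet=public name=foreman-0-0
rocks sync host network foreman-0-0 restart=yes
```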
Step 4. Install the Backend Nodes
Before we install the backend nodes (also known as “compute nodes”), we want to ensure that all disks in the backend nodes are configured and controlled by the StackIQ Cluster Manager. On node reinstall, this prevents the inadvertent loss of data on disks that are not the system disk. Now, we don’t want to reconfigure the controller and reformat disks on every installation, so we need to instruct the StackIQ Cluster Manager to perform this task the next time the backend nodes install. We do this by setting an attribute (“nukedisks”):
# rocks set appliance attr compute nukedisks true
After node reinstallation, this attribute is automatically reset to “false,” so the only way to reformat non-system disks is to deliberately set it to “true” before a node reinstall.
Now we are ready to install the backend nodes. This is the same procedure that we used to install the Foreman server. This time, however, choose “Compute” as the appliance, whether you are using the web GUI or the CLI command “insert-ethers”.
Make sure the StackIQ Cluster Manager is in "discovery" mode using the CLI or GUI and all backend nodes are PXE booted. StackIQ Cluster Manager discovers and installs each backend node in parallel, packages are installed in parallel, and disks on the node are also formatted in parallel. All this parallelism allows us to install an entire cluster, no matter the size, in about 10 to 20 minutes -- no manual steps are required. For more information on installing and using the StackIQ Cluster Manager, please visit http://www.stackiq.com/support/ or http://www.youtube.com/stackiq. Please review the above video and section 3.4 of the cluster-core roll documentation for questions.
After all the nodes in the cluster are up and running, you will be ready to deploy OpenStack via the Foreman web interface. In this example, the StackIQ Cluster Manager node was named “kaiza” and the Foreman server was named “foreman-0-0.” The compute nodes were assigned the default names compute-0-0, compute-0-1, and compute-0-2. This is how it looks in the GUI when all the installs are completed.
Step 5. Configuring Foreman to Deploy OpenStack
The Foreman server supplied with RHEL-OSP contains all the puppet manifests required to deploy machines with OpenStack roles. With the backend nodes installed and properly reporting to Foreman, we can go to the Foreman web GUI and configure the backend nodes to run OpenStack.
The example here will be for the simplest case: a Nova Network Controller using a single network, and a couple of hypervisors running the Cirros cloud image.
More complex cases (Neutron, Swift, Cinder) will follow in the next few weeks as appendices to this document. Feel free to experiment ahead of those instructions, however.
1. Go to https://<Foreman server hostname or IP>. If the security certificate is not trusted, choose “Proceed Anyway” or, if in Firefox, accept the certificate.
You should get a login screen:
2. Log in; the default username is “admin” and the default password is “changeme.” Take the time to change the password once you log in, especially if the Foreman server is available to the outside world.
3. Add a controller node
You should see all the nodes you’ve installed listed on the Foreman Dashboard. Click on the “Hosts” tab to go to the hosts window.
Click on the machine you intend to use as a controller node. You will have to change some parameters to reflect the network you are using for OpenStack (in this example, the private one).
It’s highly recommended that this machine also have a connection to the external network (www or corporate network) to simplify web access. See “Adding an additional interface” above for how to do that. Do not choose the Foreman server as a controller node: the OpenStack Dashboard overwrites httpd configuration files and will disable the ability to log into the Foreman web server. However, if you have a small cluster, you can add the Foreman server as an OpenStack Compute node, as we do in this example. You may not want to do that in a larger cluster, though; separation of services is almost always a good thing.
Click on the “Parameters” tab. There are a lot of parameters here, but we will change the minimum to reflect our network.
Click the “Override” button next to the following parameters: controller_priv_host, controller_pub_host, mysql_host, and qpid_host.
These parameters will then be listed at the bottom of the page with text fields to change them. The controller_priv_host, mysql_host, and qpid_host should all be changed to the private-interface IP of the controller node, i.e., the machine you are editing right now.
The controller_pub_host should be the IP address of the public interface (if you have added one) of the controller node, i.e., the machine you are editing right now. If you don’t know the IP addresses of the controller node, run the following in a terminal on the StackIQ Cluster Manager (substituting your controller’s hostname):
# rocks list host interface compute-0-0
This can be seen below. Once you’ve made the changes, click “Submit.”
Once the puppet run finishes, you can add OpenStack Computes. (The puppet run on the controller node can take a while to execute.)
Add OpenStack Compute Nodes
There isn’t much for an OpenStack Controller to do if it can’t launch instances of images, so we need a couple of hypervisors. We’ll do this a little differently than the Controller node, where we edited one individual machine; instead, we’ll edit the “Host Group” we want the computes to run as. This allows us to make the changes once and apply them to all the machines. Go to “More” and choose “Configuration” from the drop-down, then click on “Host Groups” in the next drop-down.
Click on “Compute (Nova Network)” and it will bring you to an “Edit Compute (Nova Network)” screen:
We’re going to edit a number of fields, similar to the Controller node. Click the “Override” button on each of the following parameters and edit them at the bottom of the page:
controller_priv_host - set to private IP address of controller
controller_pub_host - set to public IP address of controller
mysql_host - set to private IP address of controller
qpid_host - set to private IP address of controller
nova_network_private_iface - the device of the private network interface
nova_network_public_iface - the device of the public network interface
The nova_network_*_iface parameters default to em1 and em2. These may work on the machines in your cluster, and you may not have to change them. Since the test cluster is on older hardware, eth0, eth1, and eth2 are where the networks sit, so for this test cluster the appropriate changes are as below. The test cluster needs the eth2 interface for the public network because it is using foreman-0-0 as a compute node; if your Foreman node is not part of your test cluster, you may not need to change this. More advanced networking configurations, e.g., multiple networks or Neutron, may require additional parameters.
Click “Submit.” Any host listed with the “Compute (Nova Network)” role will inherit these parameters.
Now let’s add the hosts that will belong to the Host Group “Compute (Nova Network).” Go to the “Hosts” tab once again and choose all the hosts that will run as Nova Network Computes. In this example, since it’s such a small cluster, we’ll add the “foreman-0-0” machine as an OpenStack Compute:
Now click on “Select Action” and choose “Change Group.”
Click on “Select Host Group” and choose “Compute (Nova Network)” then click “Submit.”
The hosts should show the group they’ve been assigned to:
Again, you can wait for the Puppet run or spawn it yourself from StackIQ Cluster Manager. Since we have a group of machines, we will use “rocks run host” to spawn “puppet agent -tv” on all the machines:
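The command itself might look like the following. This is a sketch with rocks stubbed (echo) so it can be shown; on the Cluster Manager, remove the stub and run as root. Passing the appliance name “compute” targets every compute node at once, and foreman-0-0 is named separately since it is also acting as a compute node in this example:

```shell
rocks() { echo "rocks $*"; }   # stub for illustration only; remove on a real frontend

# Trigger an immediate puppet run on all compute nodes, plus the
# Foreman node that doubles as a compute in this example.
rocks run host compute "puppet agent -tv"
rocks run host foreman-0-0 "puppet agent -tv"
```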
Once puppet has finished, log into the OpenStack Controller Dashboard to start using OpenStack.
To access the controller node, go to http://<controller node IP>. This is accessible on either the public IP you configured for this machine or the private IP. If you have only configured the private IP, you’ll have to open a browser from the StackIQ Cluster Manager or SSH port-forward to the private IP from your desktop.
The username is “admin” and the password was randomly generated during the Controller puppet install. To get this password, go to the Foreman web GUI, click on the “Hosts” tab and click on the host name of the Controller host:
Then click “Edit” and go to the “Parameters” tab:
Copy the “admin_password” string:
Paste it into the password field on the OpenStack Dashboard and click “Submit.”
You should now be logged into the OpenStack Dashboard.
Click on “Hypervisors,” and you should see the three OpenStack compute nodes you’ve deployed.
As a simple example, we’ll deploy the Cirros cloud image that OpenStack uses in their documentation.
Click on “Images.”
Click on “Create Image” and you’ll be presented with the image configuration window.
Fill in the required information:
Name - we’ll just use “cirros”
Image Source - use default “Image Location”
Image Location - http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
Why do we know this? Because I looked it up here: http://docs.openstack.org/image-guide/content/ch_obtaining_images.html
Format - QCOW2
And make it “Public.” Then click “Create Image.”
The image will show a status of “Queued.”
And when it’s downloaded and available to create Instances, it will be labeled as “Active.”
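If you prefer the command line, the same registration can be done with the glance client that ships with RHEL-OSP 4. This is a hedged sketch: glance is stubbed with echo here so the invocation can be shown; on the controller, source the admin credentials first and drop the stub:

```shell
glance() { echo "glance $*"; }   # stub for illustration only; remove on the controller

# CLI equivalent of the "Create Image" form above.
glance image-create --name cirros --disk-format qcow2 \
    --container-format bare --is-public True \
    --copy-from http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
```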
Cool! Now we can actually launch an instance and access it.
Click on “Project” then on “Instances” in the sidebar:
Click on “Launch Instance.”
Fill out the parameters:
Availability Zone - nova, default
Instance Name - we’ll call it cirros-1
Flavor - m1.tiny, default
Instance Count - 1, default
Instance Boot Source - Select “Boot from Image”
Image Name - Select “cirros”
Set up security so we can log into the instance. Choose “Access & Security” and edit the “default” security group; for this example, we’re just making this a very promiscuous server.
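The GUI edit amounts to opening up the default security group. With the nova client of that era, the equivalent rules would look like the following hedged sketch (nova stubbed with echo); these rules allow ping and ssh from anywhere, which is what makes the server so promiscuous:

```shell
nova() { echo "nova $*"; }   # stub for illustration only; remove on the controller

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0   # allow ping from anywhere
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0    # allow ssh from anywhere
```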
Click on “Launch”
Logging into the Instance
In this simple example, to log into the instance, you must log into the hypervisor where the instance is running. Subsequent blog posts will deal with more transparent access for users.
To find out which hypervisor your instance is running on, go to the “Admin” panel from the left sidebar and click on “Instances.”
We can see the instance is running on compute-0-1 with a 10.0.0.2 IP. So from a terminal on the frontend, ssh into the hypervisor compute-0-1.
Now log into the instance as user “cirros” with password “cubswin:)” (the “:)” smiley is part of the password).
Now you can run Linux commands to prove to yourself you have a functioning instance:
Reinstalling Nodes
There are times when a machine needs to be reinstalled: hardware changes or repair, uncertainty about a machine’s state, etc. A reinstall generally takes care of these issues. The goal of StackIQ Cluster Manager is software homogeneity across heterogeneous hardware, giving you immediate consistency of your software stacks on first boot. One of the ways we do this is by making reinstallation of your hardware as fast as possible (reinstalling 1,000 nodes is about as fast as reinstalling 10) and correct when a machine comes back up.
One of the difficulties with the OpenStack puppet deployment is certificate management. When a machine is first installed and communicates with Foreman, a persistent puppet certificate is created. When a machine is re-installed or replaced, the key needs to be removed in order for the machine to resume its membership in the cluster. StackIQ Cluster Manager takes care of this by watching for reinstallation events and communicating with the Foreman server to remove the puppet certificate. When the machine finishes installing, the node will rejoin the cluster automatically. In the instance of a reinstall, if the OpenStack role has been set for this machine, the node will do the appropriate puppet run and rejoin OpenStack in the assigned role, and you really don’t have to do anything special for that to happen.
To reinstall a machine, either kick off the reinstall directly:
# rocks run host compute-0-0 “/boot/kickstart/cluster-kickstart-pxe”
or flag a PXE install and reboot the node:
# rocks set host boot compute-0-0 action=install
# rocks run host compute-0-0 “reboot”
(compute-0-0 here is an example hostname.)
If you wish to start with a completely clean machine and don’t care about the data on it, set the “nukedisks” flag to “true” before running one of the above commands:
# rocks set host attr compute-0-0 nukedisks true
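The two steps, flagging the disks and triggering the reinstall, can be combined in a small helper, sketched here with rocks stubbed (echo) and a placeholder hostname:

```shell
rocks() { echo "rocks $*"; }   # stub for illustration only; remove on a real frontend

# Reformat all disks on a node and reinstall it from bare metal.
nuke_and_reinstall() {
    rocks set host attr "$1" nukedisks true
    rocks set host boot "$1" action=install
    rocks run host "$1" "reboot"
}

nuke_and_reinstall compute-0-1
```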
StackIQ Cluster Manager has been used to run multi-use clusters with different software stacks assigned to different sets of machines, and the OpenStack implementation is no different. If you want to allocate machines to another application while using the RHEL OpenStack Bridge roll, you can turn off OpenStack deployment on those machines, and they will not be set up to participate in the OpenStack environment. To do this, simply run:
# rocks set host attr compute-0-0 has_openstack false
The bridge roll sets every compute node to participate in the OpenStack deployment; clearing this flag for a host means the machine will not. If the machine was first installed with OpenStack, then you will have to reinstall it after setting this attribute.
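To carve several machines out of the OpenStack deployment at once, loop over them. This is a sketch with rocks stubbed (echo) and placeholder hostnames; reinstall the nodes afterward if they were already running OpenStack:

```shell
rocks() { echo "rocks $*"; }   # stub for illustration only; remove on a real frontend

# Withdraw these nodes from the OpenStack deployment.
for host in compute-0-3 compute-0-4; do
    rocks set host attr "$host" has_openstack false
done
```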
Red Hat provides updates to RHEL-OSP and to RHEL Server regularly. StackIQ tracks these updates and will provide updated rolls for critical patches or service updates to RHEL-OSP. Additionally, if your frontend is properly subscribed to RHN or to Subscription Manager, these updates can be easily pulled and applied with the “rocks create mirror” command. Updating deserves a blog post of its own, which will be forthcoming.
Admittedly, we are documenting the simplest use case: Nova Networking on a single network. This is not ideal for production systems, but by now you should be able to see how you can use the different components (StackIQ Cluster Manager, Foreman, and the OpenStack Dashboard) to easily configure and deploy OpenStack. You can add complexity as you explore the RHEL OpenStack ecosystem to fit your company’s needs. In the future, we will provide further documentation on deploying Neutron, Swift, and Cinder. Additionally, layering OpenStack roles (Swift and Compute, for instance) will be a topic we explore and blog about as we move forward with Red Hat’s OpenStack Platform. Stay tuned!
Using StackIQ Cluster Manager for deploying clusters: https://s3.amazonaws.com/stackiq-release/stack3/roll-cluster-core-usersguide.pdf
Video: https://www.youtube.com/watch?v=gVPZcA-yHQY&list=UUgg-AnfqnNCp-DxpVEfJkuA
RHEL OpenStack Documentation: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/