The hardware used for this deployment was a small cluster: 1 node (i.e., 1 server) is used for the StackIQ Cluster Manager, 1 node is used as the Foreman server, and 3 nodes are used as backend/data nodes. Each node has 1 disk and all nodes are connected together via 1Gb Ethernet on a private network. StackIQ Cluster Manager, Foreman server, and OpenStack controller nodes are also connected to a corporate public network using the second NIC. Additional networks dedicated to OpenStack services can also be used but are not depicted in this graphic or used in this example. StackIQ Cluster Manager has been used in similar deployments between 2 nodes and 4,000+ nodes.
Step 1. Install StackIQ Cluster Manager
The StackIQ Cluster Manager node is installed from bare-metal (i.e., there is no pre-requisite software and no operating system previously installed) by burning the StackIQ Cluster Core Roll ISO to DVD and booting from it (the StackIQ Cluster Core Roll can be obtained from the “Rolls” section after registering at http://www.stackiq.com/download/). The Cluster Core Roll leads the user through a few simple forms (e.g., what is the IP address of StackIQ Cluster Manager, what is the gateway, DNS server, etc.) and then asks for a base OS DVD (for example, Red Hat Enterprise Linux 6.5; other Red Hat-like distributions such as CentOS are supported as well, but for Red Hat Enterprise Linux, only certified media is acceptable). The installer copies all the bits from both DVDs and automatically creates a new Red Hat distribution by blending the packages from both DVDs together.
The remainder of the StackIQ Cluster Manager installation requires no further manual steps, and this entire step takes between 30 and 40 minutes.
A detailed description of StackIQ Cluster Manager can be found in section 3 of the StackIQ Users Guide. It is strongly recommended that you familiarize yourself with at least this section before proceeding. (C’mon, really, it’s not that bad. The print is large and there are a bunch of pictures, shouldn’t take long.)
If you have further questions, please contact firstname.lastname@example.org for additional information.
Step 2. Install the Red Hat Enterprise Linux OpenStack Bridge
StackIQ has developed software, named the RHEL OpenStack Bridge Roll, that “bridges” our core infrastructure management solution to Red Hat’s OpenStack Platform. The RHEL OpenStack Bridge Roll is used to spin up Foreman services by installing a Foreman appliance. This allows you to leverage Red Hat’s Foreman OpenStack Puppet integration to deploy a fully operational OpenStack cloud.
StackIQ Cluster Manager uses the concept of “rolls” to combine packages (RPMs) and configuration (XML files which are used to build custom kickstart files) to dynamically add and automatically configure software services and applications.
The first step is to install a StackIQ Cluster Manager as a deployment machine. This requires, at a minimum, the cluster-core and RHEL 6.5 ISOs. It’s not possible to add StackIQ Cluster Manager to an already existing RHEL 6.5 machine; you must start with the installation of StackIQ Cluster Manager. The rhel-openstack-bridge roll can be added once the StackIQ Cluster Manager is up.
It is highly recommended that you check the MD5 checksums of the downloaded media.
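For example (the ISO file names below are illustrative; use the names of the files you actually downloaded and compare against the checksums published on the download page):
# md5sum roll-cluster-core-*.iso rhel-server-6.5-x86_64-dvd.iso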
You must burn the cluster-core roll and RHEL Server 6.5 ISOs to disk, or, if installing via virtual CD/DVD, simply mount the ISOs on the machine's virtual media via the BMC.
Then follow section 3 of https://s3.amazonaws.com/stackiq-release/stack3/roll-cluster-core-usersguide.pdf for instructions on how to install StackIQ Cluster Manager. (Yes! I mentioned it again.)
What You’ll Need:
- After StackIQ Cluster Manager is installed and booted, add the Red Hat Enterprise Linux OpenStack Bridge roll, create the Red Hat Enterprise Linux OpenStack Platform roll, and create an updated Red Hat Enterprise Linux Server distribution roll.
Adding the rhel-openstack-bridge roll:
Copy the roll to a directory on the StackIQ Cluster Manager. "/export" is a good place as it should be the largest partition.
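For example, assuming the ISO was downloaded to your current directory:
# cp rhel-openstack-bridge-1.0-0.x86_64.disk1.iso /export/
# cd /export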
Verify the MD5 checksums:
# md5sum rhel-openstack-bridge-1.0-0.x86_64.disk1.iso
should return f7a2e2cef16d63021e5d2b7bc2b16189
Then execute the following commands on the frontend:
# rocks add roll rhel-openstack-bridge*.iso
# rocks enable roll rhel-openstack-bridge
# rocks create distro
# rocks run roll rhel-openstack-bridge | sh
The OpenStack bridge roll will enable you to set-up a Foreman appliance which will then be used to deploy OpenStack roles.
Step 3. Completing the Red Hat Enterprise Linux 6.5 and OpenStack Platform Deployment
We need to get the Red Hat Enterprise Linux OpenStack Platform and Red Hat Enterprise Linux 6.5 updates before a full deployment is possible. The latest version of Red Hat Enterprise Linux OpenStack Platform requires updates only available in Red Hat Enterprise Linux 6.5. Fortunately, if you have a Red Hat subscription for these two components, creating and adding these as rolls is easy with StackIQ Cluster Manager. The only caveat is that this will take some time, depending on your network. Make a pot of coffee, get some donuts, and proceed with the steps below to make the required rolls.
- If the StackIQ Cluster Manager has web access, enable your Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux OpenStack Platform subscriptions with subscription-manager on the StackIQ Cluster Manager. Explaining how to do this is out of scope for this document. Please refer to the Red Hat documentation on how to do this: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s1-steps-rhnreg-x86.html (You'll have to do a "# yum -y install subscription-manager subscription-manager-gui" first.)
- A valid subscription via Satellite Server for your company. If the StackIQ Cluster Manager has access to your company's subscriptions via a company or subnet Satellite server, you can create the required rolls from the repository URLs available from your company's Satellite server.
- Once you have properly subscribed the StackIQ Cluster Manager, obtain the repoids needed:
- The repoid for Red Hat Enterprise Linux OpenStack Platform: in this example, since the StackIQ CM has web access, the repoid we will use is "rhel-6-server-openstack-4.0-rpms". (If your system is properly configured for subscription, this can be obtained by running "# yum repolist enabled" on the StackIQ CM command line.)
- The repoid for Red Hat Enterprise Linux 6 Server: in this example, since the StackIQ CM has web access, the repoid we will use is "rhel-6-server-rpms". (Also obtained by running "# yum repolist enabled" on the command line.)
Creating the rolls:
StackIQ Cluster Manager can create a roll that encapsulates all the required RPMs for a repository. This allows multiple rolls and multiple distributions to be run on different sets of hardware. At this basic level, know that we are creating a roll that will make all the RPMs available to us during kickstart of the backend nodes to fully deploy the Foreman/OpenStack infrastructure.
Prior to creating the rolls, let's check our known roll state. On the StackIQ CM command line, list the current rolls:
# rocks list roll
We have the two rolls added at install time: RHEL and rhel-openstack-bridge. Now we want to create the rolls that will allow us to add the Red Hat Enterprise Linux 6 Server updates and Red Hat Enterprise Linux OpenStack Platform.
"cd" to /export on the StackIQ CM. This is the largest partition and a good place to pull down these repositories. Then using the repoid you can obtain by running:
# cd /export
# yum repolist enabled
# rocks create mirror repoid=rhel-6-server-openstack-4.0-rpms rollname=rhel-openstack-4.0
And breathe, or drink coffee, or check email, or write a letter to your mother (Hey, she probably hasn't heard from you in a while, am I right?). This is going to take some time depending on your network.
When all the commands have completed, you'll have an ISO with all the Red Hat Enterprise Linux OpenStack Platform RPMs in a directory named after the "repoid" above. The ISO in that directory will be named after the "rollname".
# ls rhel-6-server-openstack-4.0-rpms
All that will look like this:
Let's add the roll to the distribution:
# rocks add roll rhel-6-server-openstack-4.0-rpms/rhel-openstack-4.0-6.5.update1-0.x86_64.disk1.iso
Then enable it using the name listed in the first column of "# rocks list roll":
# rocks enable roll rhel-openstack-4.0
It looks like this:
Now we'll do the same thing to get the most recent Red Hat Enterprise Linux 6 Server RPMs so we can take advantage of the full set of updates for that distribution. (Covers the OpenSSL Heartbleed bug and updates some RPMs required for the latest version of Red Hat Enterprise Linux OpenStack Platform.)
This is the same as the preceding process, so we'll just show the commands and an ending screenshot.
# rocks create mirror repoid=rhel-6-server-rpms name=RHEL-6-Server-Updates-06122014
This will take more time than the OpenStack repository. If you didn't write your mother then, do it now. More coffee is always an option. So is a second (or third!) donut. Lunch might be in order, a longish one.
(The "name" parameter will enable us to keep track of repository mirrors by date. This allows us to add updates on a roll basis without overwriting previous distributions. Testing new distributions becomes easy this way by assigning rolls to new distributions and machines to the distribution, providing delineation between production and test environments. If this doesn't make sense, don't worry, it's getting into cluster life-cycle management, and you'll understand it when you have to deal with it.)
Once the repository has been mirrored, a roll has been created. Let's add it:
# rocks add roll rhel-6-server-rpms/rhel-6-server-rpms-6.5.update1-0.x86_64.disk1.iso
# rocks list roll
to check the name.
# rocks enable roll rhel-6-server-rpms
Disable the original RHEL roll, since the Updates roll contains all that was old and all that is new.
# rocks disable roll RHEL
# rocks list roll
This is how all that looks on the command line:
Now recreate the distribution the backend nodes will install from. This creates one repository for kickstart to pull from during backend node installation or during yum updates of individual packages.
# rocks create distro
Breathe. Sign your mother's letter and address the envelope. The distro creation should be done by then, because you probably have to find her address after all these years of not writing her.
Starts like this:
And ends like this:
Update the StackIQ Cluster Manager
Now we're going to update the StackIQ Cluster Manager before installing any backend nodes. It's good hygiene and gets us running the latest and greatest Red Hat Enterprise Linux 6.5.
# yum -y update
Then reboot when it's done. Once the machine comes back up, you can install the Foreman server and then the compute nodes. The steps to do that come next.
Step 4. Install the Foreman Appliance
StackIQ Cluster Manager contains the notion of an “appliance.” An appliance has a kickstart structure that installs a preconfigured set of RPMs and services that allow for a concentrated installation of a particular application. The bridge roll provides a “Foreman” appliance that sets up the automatic installation of the Red Hat Foreman server with the required OpenStack infrastructure. It’s the fastest way to get a Foreman server up and running.
Installing Backend Nodes Using Discovery Mode in the StackIQ Cluster Manager GUI
“Discovery” mode allows the automatic installation of backend nodes without pre-populating the StackIQ Cluster Manager database with node names, IP addresses, MAC addresses, etc. The StackIQ Cluster Manager runs DHCP to answer and install any node making a PXE request on the subnet. This is ideal when you a) have full control of the network and its policies, and b) don’t care about the naming convention of your nodes. If one of these is not true, please follow the instructions for populating the database in the “Install Your Compute Nodes Using CSV Files” section of the cluster-core roll documentation referenced above (Section 3.4.2).
“Discovery” mode is no longer turned on by default, as it may conflict with a company’s networking policy. To turn on Discovery mode, in a terminal or ssh session on StackIQ Cluster Manager do the following:
# rocks set attr discover_start true
To turn it off after installation if you wish:
# rocks set attr discover_start false
DHCP is always running but with “discover_start” set to “false,” it will not promiscuously answer PXE requests. (In the next release this will simply be a button to turn on and off "discovery.")
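To check the current setting, one way is to list the global attributes and filter for it:
# rocks list attr | grep discover_start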
With Discovery turned on, you can perform installation of backend nodes via the GUI or via the command line. To install via the GUI, go to the StackIQ Cluster Manager GUI at http://<StackIQ Cluster Manager hostname or IP>
Click the Login link and log in as “root” with the password set for “root” during installation.
Go to the “Discover” tab:
Click on Appliance, and choose “Foreman” and click “Start.”
Boot the server you are using as the Foreman server. All backend nodes should be set to PXE first on the network interface attached to the private network. This is a hard requirement.
In the GUI, you should see a server called “foreman-0-0” appear in the dialog, and in sufficient time, the Visualization area in the "Discover" tab will indicate the network traffic being used during installation.
The Foreman server appliance installation is somewhat chatty. You’ll receive status updates in the Messages box at the bottom of the page for what is happening on the node. The bare metal installation of the Foreman server is relatively short, about 20 minutes depending on the size of the disks being formatted. The installation of the Foreman application takes longer and happens after the initial boot due to RPM packaging constraints of the Foreman installer. It should be done, beginning to end, in about an hour.
When the machine is up, the indicator next to its name will be green and there will be a message in the alerts box indicating the machine has installed Foreman.
Using the command line:
If for some reason you do not have access to the front-end web GUI or access is extremely slow, or if you just happen to be a command line person, there is a command to do discovery of backend resources.
To install a Foreman appliance:
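From the StackIQ Cluster Manager command line, run the discovery tool (the "insert-ethers" command referenced again in Step 5 below):
# insert-ethers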
Choose “Foreman” and choose “OK”
Boot the machine and it should be discovered, assuming it is set to PXE boot first.
Once the Foreman server is installed, you can access its web interface by running Firefox on StackIQ Cluster Manager. It should be available at the IP address listed in the output of:
# rocks list host interface foreman-0-0
Adding an additional interface
If you want it accessible on the public or corporate network and not just on the private network, it will be necessary to add another network interface attached to the public network.
If the interface was detected during install, set its IP address and place it on the public subnet (the interface name and IP below are placeholders; append "help" to either command to see the exact argument syntax):
# rocks set host interface ip foreman-0-0 <public iface> <public IP>
# rocks set host interface subnet foreman-0-0 <public iface> public
If you add the interface after the fact:
# rocks add host interface help
And fill in the appropriate fields.
In either event, to make the network change live, sync the network:
# rocks sync host network foreman-0-0 restart=yes
This procedure is more clearly delineated in section 4.3 of the cluster-core roll documentation, referenced (twice!) above.
Step 5. Install the Backend Nodes
Before we install the backend nodes (also known as “compute nodes”), we want to ensure that all disks in the backend nodes are configured and controlled by the StackIQ Cluster Manager. On node reinstall, this prevents the inadvertent loss of data on disks that are not the system disk. Now, we don’t want to reconfigure the controller and reformat disks on every installation, so we need to instruct the StackIQ Cluster Manager to perform this task the next time the backend nodes install. We do this by setting an attribute (“nukedisks”):
# rocks set appliance attr compute nukedisks true
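To verify the attribute took, one way (assuming your release supports listing appliance attributes) is:
# rocks list appliance attr compute | grep nukedisks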
After node reinstallation, this attribute is automatically set to “false,” so the only way to reformat non-system disks is to deliberately set this attribute to “true” before node reinstall.
Now we are ready to install the backend nodes. This is the same procedure that we used to install the Foreman server. This time, however, choose “Compute” as the appliance, whether you are using the web GUI or the CLI command “insert-ethers”.
Make sure the StackIQ Cluster Manager is in "discovery" mode using the CLI or GUI and all backend nodes are PXE booted. StackIQ Cluster Manager discovers and installs each backend node in parallel, packages are installed in parallel, and disks on the node are also formatted in parallel. All this parallelism allows us to install an entire cluster, no matter the size, in about 10 to 20 minutes -- no manual steps are required. For more information on installing and using the StackIQ Cluster Manager, please visit http://www.stackiq.com/support/ or http://www.youtube.com/stackiq. If you have questions, please review the videos at the YouTube link above and section 3.4 of the cluster-core roll documentation.
After all the nodes in the cluster are up and running, you will be ready to deploy OpenStack via the Foreman web interface. In this example, the StackIQ Cluster Manager node was named “kaiza” and the foreman server was named “foreman-0-0.” The compute nodes were assigned default names of compute-0-0, compute-0-1, compute-0-2.
This is how it looks on the GUI when all the installs are completed.
Step 6. Configuring Foreman to Deploy OpenStack
The Foreman server, as supplied by Red Hat, contains all the puppet manifests required to deploy machines with OpenStack roles. With the backend nodes installed and properly reporting to Foreman, we can go to the Foreman web GUI and configure the backend nodes to run OpenStack.
The example here will be for the simplest case: a Nova Network Controller using a single network, and a couple of hypervisors running the Cirros cloud image.
More complex cases (Neutron, Swift, Cinder) will follow in the next few weeks as appendices to this document. Feel free to experiment ahead of those instructions, however.
1. Go to https://<foreman server hostname or IP>
Choose “Proceed Anyway” or, if in Firefox, accept the certificate, if the security certificate is not trusted.
You should get a login screen:
2. Log in; the default username is “admin” and the default password is “changeme.” Take the time to change the password once you log in, especially if the Foreman server is available to the outside world.
3. Add a controller node
You should see all the nodes you’ve installed listed on the Foreman Dashboard. Click on the “Hosts” tab to go to the hosts window.
Click on the machine you intend to use as a controller node. You will have to change some parameters to reflect the network you are using for OpenStack (in this example, the private one).
It’s highly recommended that this machine also have a connection to the external network (www or corporate internet) to simplify web access. See “Adding an additional interface” above on how to do that. Do not choose the Foreman server as a controller node: the OpenStack Dashboard overwrites httpd configuration files and will disable the ability to log into the Foreman web server. However, if you have a small cluster, you can add the Foreman server as an OpenStack Compute node, as we do in this example. You may not want to do that in a larger cluster, though. Separation of services is almost always a good thing.
Click on the host; we will use “compute-0-0.” When the “compute-0-0” page comes up, click on “Edit.”
You should see a page called “Edit compute-0-0.local.” Set the “Host Group” tab to “Controller (Nova Network).” (An example of Neutron networking will follow in later Appendices to this document.)
Click on the “Parameters” tab. There are a lot of parameters here, but we will change the minimum to reflect our network.
Click the “Override” button next to the following parameters (these are the ones we change for this example): controller_priv_host, controller_pub_host, mysql_host, and qpid_host.
These parameters will be listed at the bottom of the page with text fields to change them. The controller_priv_host, mysql_host, and qpid_host should all be changed to the private interface IP of the controller node, i.e. the machine you are editing right now.
The controller_pub_host should be the IP address of the public interface (if you have added one) of the controller node, i.e. the machine you are editing right now.
If you don’t know the IP addresses of the controller node, run the following in a terminal on the StackIQ Cluster Manager (compute-0-0 is the controller node in this example):
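# rocks list host interface compute-0-0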
The IP address for controller_pub_host, in this instance, is on eth2, which we cabled to the corporate external network, and is 192.168.1.60.
This can be seen below. Once you’ve made the changes, click “Submit.”
Going back to the “Hosts” tab, you should see that “compute-0-0.local” now has the “Controller (Nova Network)” role.
There is a puppet agent that runs on each machine. It runs every 30 minutes. This will automatically update the machine’s configuration and make it the OpenStack Controller. If you don’t want to wait that long, start the puppet process yourself from StackIQ Cluster Manager. (Alternatively, you can ssh to compute-0-0 and manually run “puppet agent -tv”.)
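For example, to kick off the run from the StackIQ Cluster Manager (compute-0-0 is the controller in this example):
# rocks run host compute-0-0 "puppet agent -tv"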
Once the puppet run finishes, you can add OpenStack Computes. (The puppet run on the controller node can take awhile to execute.)
Add OpenStack Compute Nodes
There isn’t much for an OpenStack Controller to do if it can’t launch instances of images, so we need a couple of hypervisors. We’ll do this a little differently than the Controller node, where we edited one individual machine, and instead, edit the “Host Group” we want the computes to run as. This allows us to make the changes once and apply them to all the machines.
Go to “More” and choose “Configuration” from the drop down, then click on “Host Groups” in the next drop down.
Click on “Compute (Nova Network)” and it will bring you to an “Edit Compute (Nova Network)” screen:
Choose the “Parameters” tab:
We’re going to edit a number of fields, similar to the Controller node. Click the “override” button on each of the following parameters and edit them at the bottom of the page:
controller_priv_host - set to private IP address of controller
controller_pub_host - set to public IP address of controller
mysql_host - set to private IP address of controller
qpid_host - set to private IP address of controller
nova_network_private_iface - the device of the private network interface
nova_network_public_iface - the device of the public network interface
The nova_network_*_iface parameters default to em1 and em2. These may work on the machines in your cluster, and you may not have to change them. Since the test cluster is on older hardware, eth0, eth1, and eth2 are where the networks sit. So for this test cluster, the appropriate changes are as below. The test cluster needs the eth2 interface for the public network because it is using foreman-0-0 as a compute node. If your Foreman node is not part of your test cluster, you may not need to change this.
More advanced networking configurations, i.e. when using multiple networks or using Neutron, may require additional parameters.
Click “Submit.” Any host that is listed with the “Compute (Nova Network)” role will inherit these parameters.
Now let’s add the hosts that will belong to the “Compute (Nova Network)” Host Group.
Go to the “Hosts” tab once again, and choose all the hosts that will run as Nova Network Computes. In this example, since it’s such a small cluster, we’ll add the “foreman-0-0” machine as an OpenStack Compute:
Now click on “Select Action” and choose “Change Group.”
Click on “Select Host Group” and choose “Compute (Nova Network)” then click “Submit.”
The hosts should show the group they’ve been assigned to:
Again, you can wait for the Puppet run or spawn it yourself from StackIQ Cluster Manager. Since we have a group of machines, we will use “rocks run host” to spawn “puppet agent -tv” on all the machines:
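(The host list below is from this example; adjust it to the hosts you actually assigned to the group.)
# rocks run host compute-0-1 compute-0-2 foreman-0-0 "puppet agent -tv"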
If we had chosen only the Compute nodes for OpenStack Compute role and not the Foreman node, we could do this on just the computes by specifying their appliance type:
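(This assumes the default "compute" appliance name.)
# rocks run host compute "puppet agent -tv"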
Once puppet has finished, log into the OpenStack Controller Dashboard to start using OpenStack.
To access the controller node, go to http://<controller node ip>. This is accessible on either the public IP you configured for this machine or at the private IP. If you have only configured this on the private IP, you’ll have to open a browser from StackIQ Cluster Manager or port forward SSH to the private IP from your desktop.
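One way to set up the port forward from your desktop (the local port 8080 is arbitrary; substitute your own hostnames and IPs), then browse to http://localhost:8080:
$ ssh -L 8080:<controller node private ip>:80 root@<StackIQ Cluster Manager hostname or IP>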
The username is “admin” and the password was randomly generated during the Controller puppet install. To get this password, go to the Foreman web GUI, click on the “Hosts” tab and click on the host name of the Controller host:
Then click “Edit” and go to the “Parameters” tab:
Copy the “admin_password” string:
Paste it into the password field on the OpenStack Dashboard and click “Submit.”
You should now be logged into the OpenStack Dashboard.
Click on “Hypervisors”; you should see the three OpenStack compute nodes you’ve deployed.
As a simple example, we’ll deploy the Cirros cloud image that OpenStack uses in their documentation.
Click on “Images.”
Click on “Create Image” and you’ll be presented with the image configuration window.
The image will show a status of “Queued.”
And when it’s downloaded and available to create Instances, it will be labeled as “Active.”
Cool! Now we can actually launch an instance and access it.
Adding an Instance:
Click on “Project” then on “Instances” in the sidebar:
Click on “Launch Instance.”
Fill out the parameters:
Availability Zone - nova, default
Instance Name - we’ll call it cirros-1
Flavor - m1.tiny, default
Instance Count - 1, default
Instance Boot Source - Select “Boot from Image”
Image Name - Select “cirros”
It should look like this:
Now click "Launch."
You should see a transient “Success” notification on the OpenStack Dashboard and then the instance should start spawning.
When the instance is ready for use, it will show as “Active” with power state “Running,” and log-in should work. (The Cirros login is “cirros” and the password is “cubswin:)”; the smiley is part of the password.)
Logging into the Instance
In this simple example, to log into the instance, you must log into the hypervisor where the instance is running. Subsequent blog posts will deal with more transparent access for users.
To find out which hypervisor your instance is running on, go to the “Admin” panel from the left sidebar and click on “Instances.”
We can see the instance is running on compute-0-1 with a 10.0.0.2 IP. So from a terminal on the frontend, ssh into the hypervisor compute-0-1.
Now log into the instance as user “cirros” with password “cubswin:)” (the smiley is part of the password).
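The two hops look like this, using the hypervisor and instance IP from this example:
# ssh compute-0-1
# ssh cirros@10.0.0.2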
Now you can run Linux commands to prove to yourself you have a functioning instance:
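(These are just illustrative commands run inside the instance; output will vary.)
$ uname -a
$ ifconfig
$ df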
There are times when a machine needs to be reinstalled: hardware changes or repair, uncertainty about a machine’s state, etc. A reinstall generally takes care of these issues. The goal of StackIQ Cluster Manager is to have software homogeneity across heterogeneous hardware. StackIQ Cluster Manager allows you to have immediate consistency of your software stacks on first boot. One of the ways we do this is by making reinstallation of your hardware as fast as possible (reinstalling 1,000 nodes is about as fast as reinstalling 10) and correct when a machine comes back up.
One of the difficulties with the OpenStack puppet deployment is certificate management. When a machine is first installed and communicates with Foreman, a persistent puppet certificate is created. When a machine is re-installed or replaced, the key needs to be removed in order for the machine to resume its membership in the cluster. StackIQ Cluster Manager takes care of this by watching for reinstallation events and communicating with the Foreman server to remove the puppet certificate. When the machine finishes installing, the node will rejoin the cluster automatically. In the instance of a reinstall, if the OpenStack role has been set for this machine, the node will do the appropriate puppet run and rejoin OpenStack in the assigned role, and you really don’t have to do anything special for that to happen.
To reinstall a machine, either kickstart it directly:
# rocks run host <hostname> "/boot/kickstart/cluster-kickstart-pxe"
or set it to install on its next boot and then reboot it:
# rocks set host boot <hostname> action=install
# rocks run host <hostname> "reboot"
If you wish to start with a completely clean machine and don’t care about the data on it, set the “nukedisks” flag to true before doing one of the above installation commands:
# rocks set host attr <hostname> nukedisks true
StackIQ Cluster Manager has been used to run multi-use clusters with different software stacks assigned to different sets of machines. The OpenStack implementation is like that. If you want to allocate machines for another application and you’re using the RHEL OpenStack Bridge roll, then you can turn off OpenStack deployment on certain machines, and they will not be set-up to participate in the OpenStack environment. To do this, simply do the following:
# rocks set host attr <hostname> has_openstack false
The bridge roll sets every compute node to participate in the OpenStack distribution. Throwing this flag for a host means the machine will not participate in the OpenStack deployment. If the machine was first installed with OpenStack, then you will have to reinstall after setting this attribute.
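For example, to pull a node out of the OpenStack deployment and reinstall it (compute-0-2 is just an illustrative host name):
# rocks set host attr compute-0-2 has_openstack false
# rocks set host boot compute-0-2 action=install
# rocks run host compute-0-2 "reboot"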
Red Hat provides updates to Red Hat Enterprise Linux OpenStack Platform and to Red Hat Enterprise Linux Server regularly. StackIQ tracks these updates and will provide updated rolls for critical patches or service updates to Red Hat Enterprise Linux OpenStack Platform. Additionally, if your frontend is properly subscribed to RHN or to Subscription Manager, these updates can be easily pulled and applied with the "rocks create mirror" command. Updating and subscription management deserve a blog post of their own, which will be forthcoming.
Admittedly, we are documenting the simplest use case: Nova Networking on a single network. This is not ideal for production systems, but by now you should be able to see how you can use the different components (StackIQ Cluster Manager, Foreman, and OpenStack Dashboard) to easily configure and deploy OpenStack. Adding complexity can be done as you explore the Red Hat Enterprise Linux OpenStack Platform ecosystem to fit your company’s needs.
In the future, we will provide further documentation on deploying Neutron, Swift, and Cinder. Additionally, layering OpenStack roles (Swift and Compute, for instance) will be topics we will be exploring and blogging about as we move forward with Red Hat Enterprise Linux OpenStack Platform. Stay tuned!
Greg Bruno, Ph.D., VP of Engineering, StackIQ