The Pain Curve: The Complexity of Clusters and Why Clusters are So Different

Posted by Greg Bruno on Jul 21, 2014 4:17:00 PM

I've been building clusters for my entire professional career and I've known that clusters are different, but never could quite articulate why. Until now. 

After having many conversations with operations team members from a broad cross-section of enterprises, I now have a handle on why clusters are so different from farms of single-purpose servers that reside in traditional data centers.

For every organization that operates a cluster with traditional data center tools, there is what I call a "Pain Curve" (see diagram below). It is difficult to quantify the number of servers required to reach the pain threshold, which is dependent on the size and quality of the operations staff. But one thing is certain – for those who don't have an automated solution that can address the cluster requirements of uniform software stacks, consistent service configurations, and total server awareness, real pain is coming and failure is inevitable.

Due to the rise of Hadoop and OpenStack, many enterprises are now deploying their first small clusters of 10 to 20 servers. At this small scale, the complexity of operating the cluster looks and feels like 10-20 general data center servers – that is because we are on the far left side of the operational complexity graph below. It is not until the clusters scale, as they inevitably do, that the pain caused by the compounding complexity becomes apparent. We've seen this problem occur time and time again.

[Diagram: Dr. Bruno's Pain Curve]

Consider one real-world example involving a top-tier financial services company. They were building a Hadoop cluster, and their projected production cluster was scoped to be 100 servers. They had plenty of experience running 100-server clusters before, and they felt they had the situation under control. As they did in the past, they cobbled together a home-grown project to manage their small cluster.

Soon after, they put the 100-server cluster into production. Once operational, the machine generated so much value for the business that demand for it skyrocketed and they had to scale the cluster. The cluster was expanded to 350 servers, but somewhere between 100 and 350 servers the home-grown project failed. All 350 servers went down – the cluster effectively became a multi-million dollar paperweight.

They had crossed the pain threshold, and they recognized that their home-grown project had landed them in the “Failure Zone.” 

Finally, after months of pain and inevitable failure, this company utilized an automated solution that was meticulously designed to manage clusters at scale, and the company was able to put all 350 servers back online again with a sustainably configured architecture in just 36 hours.

 

Clustered Servers See the World Differently

Why did this global financial services firm have such trouble solving this seemingly simple problem? Because the worldview of a single-purpose server in a traditional data center is that it accepts external requests, processes those requests, then responds to the requester. It is like a thoroughbred wearing blinders in the Kentucky Derby. Such a detached server has no notion of any of the other hundreds or thousands of servers that are happily churning through requests in the adjoining racks. As new servers are added to the data center, they are racked and stacked, installed and configured, then brought online -- the existing servers remain untouched.

The worldview of a server in a cluster is vastly different. By definition, each server in a cluster must be aware of every other server in the cluster. This is so the servers in the cluster can “collectively” accept external requests, process those requests, and then respond to the request – as a team.

At the absolute minimum, each server in a cluster must know about all the other servers in the cluster. Additionally, each cluster service (e.g., Hadoop services) must be configured with the awareness of the other services on all the cluster servers. And, more often than not, each service must be executing on top of the exact same software stack on each cluster server in order to produce consistent and correct results. 

In short, all cluster servers must: 1) have the exact same bits on each server; 2) have the exact same software configuration; and 3) have total awareness of each of the cluster servers. As new servers are added to a cluster, the new servers must satisfy all three of the above requirements (same bits, same configuration and total awareness). Moreover, the existing clustered servers must now be aware of the newly added servers.
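The first two requirements lend themselves to automated checking. As a rough sketch (the hostnames, package names, and config fingerprints below are invented for illustration, not StackIQ's actual mechanism), a management tool could flag drift like this:

```python
# Hypothetical per-host state: the installed package set ("bits") and a
# fingerprint of the service configuration. A real tool would collect
# these from the live hosts; the values here are made up for the sketch.
hosts = {
    "compute-0-0": {"packages": {"hadoop-2.3", "jdk-1.7"}, "config": "a1b2"},
    "compute-0-1": {"packages": {"hadoop-2.3", "jdk-1.7"}, "config": "a1b2"},
    "compute-0-2": {"packages": {"hadoop-2.3", "jdk-1.6"}, "config": "a1b2"},
}

def consistency_report(hosts):
    """Return the hosts whose bits or configuration drift from the first host."""
    baseline = next(iter(hosts.values()))
    return [name for name, state in hosts.items()
            if state["packages"] != baseline["packages"]
            or state["config"] != baseline["config"]]

print(consistency_report(hosts))  # ['compute-0-2'] -- it has a different JDK
```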

Back to the single-purpose servers in the data center. Since each server is an island, as new servers are added, the complexity added to the operations team increases by a “linear” amount. If I'm wearing my computer science hat, the complexity of operations for the general data center servers is O(N), where N = number of servers. It is linear because new servers do not require configuration changes to the existing servers.

Contrast this with the total awareness requirement for cluster servers. Newly added servers increase the burden on the operations team by a "quadratic" amount because the existing servers must be reconfigured to be made aware of the new servers. In other words, the complexity of operating clusters is O(N²): each of the N servers must know about the other N-1.
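A back-of-the-envelope sketch of the difference (the growth path and batch size are arbitrary choices for illustration):

```python
def reconfig_ops(n_existing, n_new, clustered):
    """Configuration operations needed to add n_new servers to n_existing.

    Stand-alone servers: configure only the new machines.
    Clustered servers: configure the new machines AND touch every
    existing server so it learns about the newcomers.
    """
    if not clustered:
        return n_new
    return n_new + n_existing

# Growing from 100 to 350 servers, 50 at a time:
total_standalone = total_clustered = 0
size = 100
while size < 350:
    total_standalone += reconfig_ops(size, 50, clustered=False)
    total_clustered += reconfig_ops(size, 50, clustered=True)
    size += 50

print(total_standalone, total_clustered)  # 250 vs 1250 operations
```

The stand-alone total grows with the number of servers added; the clustered total grows with the size of the whole cluster, which is why the pain compounds as the cluster scales.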

This level of coordination is what makes clusters the obvious choice for next-gen Big Data platforms like Hadoop and cloud architectures like OpenStack. Clusters deliver vastly greater speed, power, and agility, but, as we now know, that same coordination is also what makes clusters too complex to manage without an automated solution.

 

 


Topics: hadoop cluster, hadoop, cluster management, big infrastructure, OpenStack

Automate the Way You Work With Spreadsheets

Posted by Anoop Rajendra on Jul 14, 2014 2:36:30 PM

This post discusses how to track data center topology by using spreadsheet applications like Microsoft Excel or Google Docs Spreadsheets. Many data center and network operators maintain the topology of services, appliances, hosts, configurations, etc. in spreadsheets. Since spreadsheets are a portable format, this allows them to track changes and move the data around with ease. It also allows the administrator to exert fine-grained control over the topology of their datacenter operations.

However, one of the challenges of maintaining data in spreadsheets is translating from spreadsheet to actual implementation. The administrator is required to read the data from the spreadsheet and manually type in commands on the console to bring the system up to the state described in the spreadsheet. Even with experienced system administrators, this process is subject to error and failure. But how can you leverage the advantages of the spreadsheet format and at the same time automate the process?

Enter StackIQ – The Automated Way to Work With Spreadsheets

If you have been following us for a while, you already know that here at StackIQ we believe that automation is the key to success in today’s enterprise data center, and if there is a way to automate it, we’ll find it. Here is how to leverage the information in a spreadsheet and eliminate the manual process involved.

If the data stored in the spreadsheet is in a compatible format (we’ll get to what formats are compatible later), StackIQ can ingest the spreadsheet directly into a running StackIQ Cluster Manager. Our software can then automatically translate the data into runnable commands. This way, the data stored in the spreadsheet is no longer just a description; it is an actual representation of the desired state of the cluster.

There are two types of spreadsheets that StackIQ currently supports. One is the Hosts spreadsheet, and the other is the Attributes spreadsheet.

Hosts Spreadsheet

Let’s start with the Hosts spreadsheet. The Hosts spreadsheet, shown below, is used to add hosts to an existing StackIQ Cluster Manager.[1]

[Screenshot: example Hosts spreadsheet]

As the spreadsheet shows, if we know the MAC addresses of the machines in our cluster, we can assign IP addresses, hostnames, network information, rack and rank information, and appliance types to each of these machines. This gives the administrator fine-grained control over the cluster.
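Since the hostfile is just a CSV, it is easy to generate programmatically. A minimal sketch follows; note the column names and values here are illustrative assumptions, not the authoritative StackIQ hostfile schema, so check the documentation before loading a real file:

```python
import csv

# Hypothetical rows for a Hosts spreadsheet. Column names are assumed
# for this sketch -- consult the StackIQ docs for the real hostfile format.
rows = [
    {"name": "compute-0-0", "mac": "00:11:22:33:44:55", "ip": "10.1.255.254",
     "appliance": "compute", "rack": "0", "rank": "0"},
    {"name": "compute-0-1", "mac": "00:11:22:33:44:56", "ip": "10.1.255.253",
     "appliance": "compute", "rack": "0", "rank": "1"},
]

with open("hosts_config.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()   # header row: name,mac,ip,appliance,rack,rank
    writer.writerows(rows)
```

The resulting hosts_config.csv is what gets loaded with the rocks command shown below.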

Importing a Hosts spreadsheet is a very simple process:

1. Download the spreadsheet onto StackIQ Cluster Manager. Let's call it hosts_config.csv

2. Run the command:

# rocks load hostfile file=hosts_config.csv

When a spreadsheet is ingested using the above command, the network information and hostnames in the spreadsheet are used to configure the hosts. If the administrator decides to change the naming or networking information, the spreadsheet is updated and the process is repeated: ingest the spreadsheet again, and re-install the hosts.

Attributes Spreadsheet

On a StackIQ Cluster Manager, the configuration information for the cluster and the properties of the hosts are maintained in a database as key-value pairs. These properties are called Attributes. The attributes follow a simple, hierarchical schema: in order, there are global attributes, appliance attributes, and host attributes, each taking precedence over the previous level in the hierarchy. These attributes can be manipulated using the StackIQ command line or using the spreadsheet.

The Attribute spreadsheet, shown below, is used to manipulate attributes on StackIQ Cluster Manager.

[Screenshot: example Attributes spreadsheet]

This simple spreadsheet shows the following for this cluster:

  • The discover_start attribute is set to true in the global scope.
  • The nukedisks attribute is set to true for all compute appliances.
  • compute-0-0, however, has the nukedisks attribute set to false, which overrides the appliance-level attribute.
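The precedence rules at work in this example can be sketched in a few lines of Python (a simplified model of the hierarchy, not StackIQ's actual implementation):

```python
# Three attribute scopes, most specific wins: host > appliance > global.
global_attrs = {"discover_start": "true"}
appliance_attrs = {"compute": {"nukedisks": "true"}}
host_attrs = {"compute-0-0": {"nukedisks": "false"}}

def resolve(attr, host, appliance):
    """Look an attribute up from the most specific scope to the least."""
    if attr in host_attrs.get(host, {}):
        return host_attrs[host][attr]
    if attr in appliance_attrs.get(appliance, {}):
        return appliance_attrs[appliance][attr]
    return global_attrs.get(attr)

print(resolve("nukedisks", "compute-0-0", "compute"))  # false (host override)
print(resolve("nukedisks", "compute-0-1", "compute"))  # true (appliance level)
```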

Importing the Attributes spreadsheet is as simple as the above process for the Hosts spreadsheet.

1. Download the spreadsheet onto the cluster manager. Let's call it attr_list.csv

2. Run the command:

# rocks load attrfile file=attr_list.csv

Now, we can set the hosts to install using the command:

# rocks set host boot compute ambari action=install 

Then, we power-cycle the hosts to let them install, and boot up into a running OS. And that’s it!

Try it for yourself - Download our software (free for up to 16 nodes), use the spreadsheet example from this post or create your own, and spin up a cluster.

In the future we plan to use spreadsheets to configure more services on the cluster. We’re working on configuring disk controllers, disk partitioning, and Hadoop services like Ambari and Cloudera using spreadsheets. Stay tuned and check back for more developments.

Any questions or comments? Contact us @StackIQ

The StackIQ Team

[1] StackIQ Cluster Manager is the machine that provisions the OS on the backend nodes, and manages and monitors the installation.


Topics: automation, stackiq, data center topology, spreadsheets

Is OpenStack Ready for Primetime?

Posted by Matthias Ankli on Jul 8, 2014 11:07:26 AM

Here at StackIQ we are all very excited about OpenStack and the possibilities that it creates for businesses thinking about adopting the private cloud model.

OpenStack is gaining significant momentum in the enterprise data center and there is a common consensus among cloud architects that it’s now ready for prime time. Well, maybe there are a few doubters out there but what can you do about that, right?

Anyways, we wanted to share a few thoughts on what’s going on in the space, and what we have been working on in regards to OpenStack.

  • First of all, why private cloud? There are ample options to get your cloud on by working with vendors like Amazon (AWS), Google, Microsoft, etc., but these are public cloud solutions. Specialty hardware or configuration requirements, regulatory requirements, network latency, and security concerns are just a few of the cases where a private cloud solution is required to get the job done.
  • Data centers have been transitioning from proprietary solutions to open source software and less costly commodity hardware. We have seen this happening in Big Data, general data center applications, HPC, etc. Now the paradigm shift is starting in the cloud space as well.
  • In private cloud environments, VMware, for example, has been a dominant player, with over 50% of enterprises using VMware products for cloud virtualization projects. It supposedly works out of the box, but maintaining a VMware cloud is pricey. Up until now, though, there were very few enterprise-grade alternatives available.
  • And here comes OpenStack – It’s a viable solution and like other open source projects, it levels the playing field for businesses of all shapes and sizes. But there is a caveat. Like everything open-source, OpenStack is a set of loosely coupled projects, and operating an OpenStack-powered cloud on your own brings its own set of challenges with deployment and management.

So, is OpenStack ready for business and can it become the standard for private cloud in the enterprise?

We say yes, OpenStack is ready for the enterprise data center and it will unlock new possibilities. But like we have seen with Big Data (Hadoop), the cornerstones of OpenStack’s success in the enterprise are Automation, Integration and Scalability.

Here are a few fundamental things to keep in mind if you have been playing around with the idea of jumping on the OpenStack bandwagon: 

  • Automate as much maintenance work as possible and free up the IT workforce to create new applications, rather than spend it on the care and feeding of infrastructure.
  • The infrastructure must be able to integrate with a wide range of hardware and applications.
  • Easy to scale – the infrastructure must remain flexible and stable enough to rapidly add capacity to meet business requirements.

Many vendors are lining up to emerge as the leader as the OpenStack ecosystem continues to grow. Red Hat and many others back the project but we believe that Red Hat, with its experience in Linux and the footprint of RHEL in the enterprise, is clearly in a great position to take the lead. That’s why we certified our software suite with Red Hat and joined the Red Hat OpenStack Cloud Infrastructure ecosystem.

StackIQ’s holistic automation is helping to accelerate OpenStack adoption in the enterprise by reducing the resources needed for deployment and management. Just like StackIQ automates the deployment and management for Hadoop, we now offer the same capabilities for Red Hat OpenStack Platform customers.

Alcatel-Lucent already utilizes StackIQ for the deployment and management of Red Hat's OpenStack Platform for its CloudBand™ NFV (Network Functions Virtualization) Platform. Alcatel-Lucent’s technology is used by the largest telecommunications operators like T-Mobile, Telefonica and NTT. Read the announcement from earlier this year. Other heavy hitters like AT&T, Comcast, Gap and Disney have already deployed, or announced their intention to deploy OpenStack-powered clouds in the near future.

Together, StackIQ and Red Hat are committed to provide a best-of-breed solution to accelerate the adoption of OpenStack technology in the enterprise data center.

Ready to take the OpenStack plunge? Start by reading Dr. Bruno’s blog post on how to deploy and manage Red Hat OpenStack Platform with StackIQ. More questions? Talk to us. You like infographics? You just found the best OpenStack infographic on the web and you are welcome (thanks to IDG Connect and Red Hat). 

The StackIQ Team (@StackIQ)


Topics: Cloud, automation, OpenStack

How to Use Cloudera Enterprise 5 With StackIQ Cluster Manager

Posted by Greg Bruno on Apr 25, 2014 12:17:00 PM

(Note that these instructions are for Cloudera Enterprise 5. To use StackIQ Cluster Manager with a previous Cloudera release, please see this blog post.)

StackIQ takes a “software defined infrastructure” approach to provision and manage cluster infrastructure that sits below Big Data applications like Hadoop. In this post, we’ll discuss how this is done, followed by a step-by-step guide to installing Cloudera Manager on top of StackIQ’s management system.

Components:

The hardware used for this deployment was a small cluster: 1 node (i.e. 1 server) is used for the StackIQ Cluster Manager and 4 nodes are used as backend/data nodes. Each node has 2 disks and all nodes are connected together via 1Gb Ethernet on a private network. The StackIQ Cluster Manager node is also connected to a public network using its second NIC. StackIQ Cluster Manager has been used in similar deployments between 2 nodes and 4,000+ nodes.

[Diagram: cluster hardware configuration]

 

Step 1: Install StackIQ Cluster Manager

The StackIQ Cluster Manager node is installed from bare metal (i.e. there is no prerequisite software and no operating system previously installed) by burning the StackIQ Cluster Core Roll ISO to DVD and booting from it (the StackIQ Cluster Core Roll can be downloaded from the Rolls section after registering). The Core Roll leads the user through a few simple forms (e.g., what is the IP address of the Cluster Manager, what is the gateway, DNS server) and then asks for a base OS DVD (for example, Red Hat Enterprise Linux 6.5; other Red Hat-like distributions such as CentOS are supported as well). The installer copies all the bits from both DVDs and automatically creates a new Red Hat distribution by blending the packages from both DVDs together.

The remainder of the Cluster Manager installation requires no further manual steps and this entire step takes between 30 to 40 minutes.

 

Step 2: Install the CDH Bridge Roll

StackIQ has developed software that “bridges” our core infrastructure management solution to Cloudera’s Hadoop distribution that we’ve named the CDH Bridge Roll. One feature of our management solution is that it records several parameters about each backend node (e.g., number of CPUs, networking configuration, disk partitions) in a local database. After StackIQ Cluster Manager is installed and booted, it is time to download and install the CDH Bridge Roll:

  • Log into the frontend as "root" and download the cdh-bridge ISO from here.

  • Then execute the following commands at the root prompt:

 # rocks add roll <path_to_iso>
 # rocks enable roll cdh-bridge
 # rocks create distro
 # rocks run roll cdh-bridge | sh

The cluster is now configured to install Cloudera packages on all nodes.

 

Step 3: Install Cloudera Manager and Cloudera CDH5 Roll

You can download a prepackaged Cloudera Manager here and a prepackaged Cloudera CDH5 from here.

We will now install these 2 ISOs.

 # rocks add roll cloudera-cdh5-6.5-0.x86_64.disk1.iso
 # rocks add roll cloudera-manager5-6.5-0.x86_64.disk1.iso
 # rocks enable roll cloudera-cdh5
 # rocks enable roll cloudera-manager5
 # rocks create distro
 # rocks run roll cloudera-cdh5 | sh
 # rocks run roll cloudera-manager5 | sh

 

Step 4: Install the backend nodes

Before we install the backend nodes (also known as compute nodes), we want to ensure that all disks in the backend nodes are optimally configured for HDFS. During an installation of a data node, our software interacts with the disk controller to optimally configure it based on the node’s intended role. For data nodes, the disk controller will be configured in “JBOD mode” with each disk configured as a RAID 0, a single partition will be placed on each data disk and a single file system will be created on that partition. For example, if a data node has one boot disk and 4 data disks, after the node installs and boots, you’ll see the following 4 file systems on the data disks: /hadoop01, /hadoop02, /hadoop03 and /hadoop04.
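The resulting layout for such a node (one boot disk plus four data disks) can be sketched as follows; the device names are illustrative, not what StackIQ itself reports:

```python
def data_node_layout(disks):
    """Sketch of the per-disk layout on a data node: the first disk holds
    the OS, and each data disk becomes its own single-disk RAID 0 with one
    partition and one file system mounted at /hadoopNN."""
    boot, *data = disks
    layout = {boot: "boot/OS"}
    for i, disk in enumerate(data, start=1):
        layout[disk] = "/hadoop%02d" % i
    return layout

print(data_node_layout(["sda", "sdb", "sdc", "sdd", "sde"]))
# {'sda': 'boot/OS', 'sdb': '/hadoop01', ..., 'sde': '/hadoop04'}
```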

For more information on this feature, see our blog post Why Automation is the Secret Ingredient for Big Data Clusters.

Now we don’t want to reconfigure the controller and reformat disks on every installation, so we need to instruct the StackIQ Cluster Manager to perform this task the next time the backend nodes install. We do this by setting an attribute (“nukedisks”) with the rocks command line:

# rocks set appliance attr compute nukedisks true
# rocks set appliance attr cdh-manager nukedisks true

Now we are ready to install the backend nodes. First we put the StackIQ Cluster Manager into "discovery" mode using the CLI or GUI, and all backend nodes are PXE booted. We will boot the first node as a cdh-manager appliance. The cdh-manager node will run the Cloudera Manager web admin console used to configure, monitor, and manage CDH.


After installing, it shows up as below:

[Screenshot: cdh-manager appliance after discovery]

We will install all the other nodes in the cluster as compute nodes. StackIQ Cluster Manager discovers and installs each backend node in parallel (10 to 20 minutes) - no manual steps are required.


For more information on installing and using the StackIQ Cluster Manager (a.k.a. Rocks+), please visit StackIQ Support or watch the demo video. 

After all the nodes in the cluster are up and running you will be ready to install Cloudera Manager. In this example, the StackIQ Cluster Manager node was named “frontend” and the compute nodes were assigned default names of compute-0-0, compute-0-1, compute-0-2 (3 nodes in Rack 0), and compute-1-0 (1 node in Rack 1).

 

Step 5: Install Cloudera Manager 

SSH into the cdh-manager appliance and, as root, execute:

# /opt/rocks/sbin/cm5/cloudera-manager-installer.bin --skip_repo_package=1

This will install Cloudera Manager with packages from our local yum repository as opposed to fetching packages over the internet.

 

Step 6: Select What to Install 

Log into the cdh-manager node at http://<cdh-manager>:7180 (where ‘<cdh-manager>’ is the FQDN of your cdh-manager node) with username admin and password admin.

[Screenshot: Cloudera Manager login page]

 

Choose Cloudera Enterprise trial if you want to do a trial run

[Screenshot: Cloudera Manager edition selection]

 

Click Continue in the screen below.

[Screenshot: Cloudera Manager setup screen]

 

Specify the list of hosts for the CDH installation, e.g., compute-0-[0-3],cdh-manager-0-0.

[Screenshot: specifying hosts for CDH installation]

After all the hosts are identified, hit Continue.

[Screenshot: host search results]

Choose Use Packages and select CDH5 as the version in the screen below.

[Screenshot: selecting CDH5 packages]

 

Specify a custom repository as the CDH release you want to install. Specify http://<frontend>/install/distributions/rocks-dist/x86_64/ for the URL of the repository, where <frontend> is the IP address of the cluster’s frontend.

[Screenshot: custom repository URL]

In the example above, 10.1.1.1 was the IP address of the private eth0 interface on the frontend.

Choose All hosts accept same private key as the authentication method. Use Browse to upload the private key present in /root/.ssh/id_rsa on StackIQ Cluster Manager.

[Screenshot: SSH authentication method]

You will then see a screen where the progress of the installation will be indicated. After installation completes successfully, hit Continue.

[Screenshot: installation progress]

 

You will then be directed to the following screen where all hosts will be inspected for correctness.

[Screenshot: host inspection results]

 

Choose the combination of services you want to install and hit Continue.

[Screenshot: service selection]

  

Review that all services were successfully installed.

[Screenshot: service installation review]

Finally your Hadoop services will be started.

[Screenshot: Hadoop services started]

  

Step 7: Run a Hadoop sample program

It is never enough to set up a cluster and the applications users need and then just let them have at it. There are generally nasty surprises for both parties when this happens. A validation check is required to make sure everything is working as expected.

Do the following to test whether the cluster is functional: 

  • Log into the cdh-manager node as “root” via SSH or PuTTY.

  • On the command line, run the following map-reduce program as the “hdfs” user, which runs a simulation to estimate the value of pi based on sampling:

# sudo -u hdfs hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000

Output should look something like this.

[Screenshot: pi example output]
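For intuition about what the job computes: it estimates π from the fraction of sampled points in the unit square that land inside the quarter circle (the Hadoop example uses a quasi-random sampling scheme; the plain Monte Carlo sketch below is just the idea, not the job's actual code):

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that fall inside the quarter circle approaches pi/4."""
    rng = random.Random(seed)
    inside = sum(1 for _ in range(n_samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi(100000))  # roughly 3.14
```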

Congratulations, you are done!

We’re certain you’ll find this the quickest way to deploy a cluster capable of running Cloudera Hadoop. Give it a shot and send us your questions!

The StackIQ Team

@StackIQ


Deploy and Manage Red Hat Enterprise Linux OpenStack Platform With StackIQ

Posted by Greg Bruno on Apr 15, 2014 10:03:00 AM

StackIQ has officially partnered with Red Hat to simplify the process of deploying Red Hat Enterprise Linux OpenStack Platform. StackIQ Cluster Manager is ideal for deploying the hardware infrastructure and application stacks of heterogeneous data center environments. With the Red Hat Enterprise Linux OpenStack Platform offering, StackIQ Cluster Manager handles the automatic bare-metal installation of heterogeneous hardware and the correct configuration of the multiple networks required by OpenStack. StackIQ Cluster Manager also automatically deploys Red Hat Foreman, which enables web-based configuration of OpenStack software on the deployed nodes. All OpenStack services are available for management and deployment via the OpenStack Dashboard, including Nova, Neutron, Cinder, Swift, etc. StackIQ Cluster Manager enables the ongoing management, deployment, and integration of Red Hat OpenStack services on growing and multi-use clusters.

StackIQ takes a “software defined infrastructure” approach to provision and manage cluster infrastructure that sits below applications like OpenStack and Hadoop. In this post, we’ll discuss how this is done, followed by a step-by-step guide to installing Red Hat Enterprise Linux OpenStack with StackIQ’s management system.

Components:
The hardware used for this deployment was a small cluster: 1 node (i.e., 1 server) is used for the StackIQ Cluster Manager, 1 node is used as the Foreman server, and 3 nodes are used as backend/data nodes. Each node has 1 disk and all nodes are connected together via 1Gb Ethernet on a private network. StackIQ Cluster Manager, Foreman server, and OpenStack controller nodes are also connected to a corporate public network using the second NIC. Additional networks dedicated to OpenStack services can also be used but are not depicted in this graphic or used in this example. StackIQ Cluster Manager has been used in similar deployments between 2 nodes and 4,000+ nodes.
[Diagram: Red Hat OpenStack deployment architecture]
 

Step 1. Install StackIQ Cluster Manager

The StackIQ Cluster Manager node is installed from bare-metal (i.e., there is no pre-requisite software and no operating system previously installed) by burning the StackIQ Cluster Core Roll ISO to DVD and booting from it (the StackIQ Cluster Core Roll can be obtained from the “Rolls” section after registering at http://www.stackiq.com/download/). The Cluster Core Roll leads the user through a few simple forms (e.g., what is the IP address of StackIQ Cluster Manager, what is the gateway, DNS server, etc.) and then asks for a base OS DVD (for example, Red Hat Enterprise Linux 6.5; other Red Hat-like distributions such as CentOS are supported as well, but for Red Hat Enterprise Linux, only certified media is acceptable). The installer copies all the bits from both DVDs and automatically creates a new Red Hat distribution by blending the packages from both DVDs together.

The remainder of StackIQ Cluster Manager installation requires no further manual steps and this entire step takes between 30 to 40 minutes.

A detailed description of StackIQ Cluster Manager can be found in section 3 of the StackIQ Users Guide. It is strongly recommended that you familiarize yourself with at least this section before proceeding. (C’mon, really, it’s not that bad. The print is large and there are a bunch of pictures, shouldn’t take long.)

https://s3.amazonaws.com/stackiq-release/stack3/roll-cluster-core-usersguide.pdf

If you have further questions, please contact support@stackiq.com for additional information.

Step 2. Install the Red Hat Enterprise Linux OpenStack Bridge

StackIQ has developed software that “bridges” our core infrastructure management solution to Red Hat’s OpenStack Platform, which we’ve named the RHEL OpenStack Bridge Roll. The RHEL OpenStack Bridge Roll is used to spin up Foreman services by installing a Foreman appliance. This allows you to leverage Red Hat’s Foreman/OpenStack Puppet integration to deploy a fully operational OpenStack cloud.

StackIQ Cluster Manager uses the concept of “rolls” to combine packages (RPMs) and configuration (XML files which are used to build custom kickstart files) to dynamically add and automatically configure software services and applications.

The first step is to install a StackIQ Cluster Manager as a deployment machine. This requires, at a minimum, the cluster-core and RHEL 6.5 ISOs. It is not possible to add StackIQ Cluster Manager to an already existing RHEL 6.5 machine; you must start with the installation of StackIQ Cluster Manager. The rhel-openstack-bridge roll can be added once the StackIQ Cluster Manager is up.

It is highly recommended that you check the MD5 checksums of the downloaded media.

You must burn the cluster-core roll and RHEL Server 6.5 ISOs to disk, or, if installing via virtual CD/DVD, simply mount the ISOs on the machine's virtual media via the BMC.

Then follow the instructions in section 3 of https://s3.amazonaws.com/stackiq-release/stack3/roll-cluster-core-usersguide.pdf to install StackIQ Cluster Manager. (Yes! I mentioned it again.)

What You’ll Need:

  • After StackIQ Cluster Manager is installed and booted, add the Red Hat Enterprise Linux OpenStack Bridge, create the Red Hat Enterprise Linux OpenStack roll, and create an updated Red Hat Enterprise Linux Server distribution roll.

Copy the roll to a directory on the StackIQ Cluster Manager. "/export" is a good place as it should be the largest partition.

Verify the MD5 checksums:

# md5sum rhel-openstack-bridge-1.0-0.x86_64.disk1.iso

should return f7a2e2cef16d63021e5d2b7bc2b16189

Then execute the following commands on the frontend:

# rocks add roll rhel-openstack-bridge*.iso 
# rocks enable roll rhel-openstack-bridge
# rocks create distro
# rocks run roll rhel-openstack-bridge | sh

The OpenStack bridge roll will enable you to set up a Foreman appliance which will then be used to deploy OpenStack roles. 

Step 3. Completing the Red Hat Enterprise Linux 6.5 and OpenStack Platform Deployment
We need to get the Red Hat Enterprise Linux OpenStack Platform and the Red Hat Enterprise Linux 6.5 updates before a full deployment is possible. The latest version of Red Hat Enterprise Linux OpenStack Platform requires updates that are only available for Red Hat Enterprise Linux 6.5. Fortunately, if you have a Red Hat subscription for these two components, creating and adding them as rolls is easy with StackIQ Cluster Manager. The only caveat is that this will take some time, depending on your network. Make a pot of coffee, get some donuts, and proceed with the steps below to make the required rolls. 

What You’ll Need:

  • If the StackIQ Cluster Manager has web access, enable your Red Hat Enterprise Linux 6.5 and Red Hat Enterprise Linux OpenStack Platform subscriptions with subscription-manager on the StackIQ Cluster Manager. Explaining how to do this is out of scope for this document. Please refer to the Red Hat documentation on how to do this: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Installation_Guide/s1-steps-rhnreg-x86.html (You'll have to do a "# yum -y install subscription-manager subscription-manager-gui" first.)
    • OR
  • A valid subscription via Satellite Server for your company. If the StackIQ Cluster Manager has access to your company's subscriptions via a company or subnet Satellite server, you can create the required rolls from the repository URLs available from your company's Satellite server. 
  • Once you have properly subscribed the StackIQ CM, obtain the required repoids:
    • The repoid for Red Hat Enterprise Linux OpenStack Platform: in this example, since the StackIQ CM has web access, the repoid we will use is "rhel-6-server-openstack-4.0-rpms". (If your system is properly subscribed, this can be obtained by running "# yum repolist enabled" on the StackIQ CM command line.)
    • The repoid for Red Hat Enterprise Linux 6 Server: in this example, since the StackIQ CM has web access, the repoid we will use is "rhel-6-server-rpms". (Also obtained by running "# yum repolist enabled" on the command line.)
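If many repositories are enabled, the repoids can be pulled out of the "yum repolist enabled" output with a little awk. The listing below is a made-up sample for illustration; on a subscribed system you would pipe the real command instead of the here-string:

```shell
# Sample `yum repolist enabled` output (illustrative only).
yum_output='repo id                              repo name
rhel-6-server-openstack-4.0-rpms     Red Hat OpenStack 4.0 (RPMs)
rhel-6-server-rpms                   Red Hat Enterprise Linux 6 Server (RPMs)
repolist: 12345'

# Skip the header row and the trailing "repolist:" summary,
# printing just the first column (the repoid).
echo "$yum_output" | awk 'NR > 1 && $1 != "repolist:" {print $1}'
```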
Creating the rolls:
StackIQ Cluster Manager can create a roll that encapsulates all the required RPMS for a repository. This allows for multiple rolls and multiple distributions to be run on different sets of hardware. At this basic level, know that we are creating a roll that will allow us to have all the RPMS available to us during kickstart of the backend nodes to fully deploy the Foreman/OpenStack infrastructure.
Prior to creating the rolls, let's check our known roll state. On the StackIQ CM command line, list the current rolls:
# rocks list roll 
We have the two rolls added at install time: RHEL and rhel-openstack-bridge. Now we want to create the rolls that will allow us to add the Red Hat Enterprise Linux 6 Server updates and Red Hat Enterprise Linux OpenStack Platform.
"cd" to /export on the StackIQ CM. This is the largest partition and a good place to pull down these repositories. Then using the repoid you can obtain by running:
# cd /export
# yum repolist enabled
Do:
# rocks create mirror repoid=rhel-6-server-openstack-4.0-rpms rollname=rhel-openstack-4.0
And breathe, or drink coffee, or check email, or write a letter to your mother (hey, she probably hasn't heard from you in a while, am I right?). This is going to take some time depending on your network.
When all the commands have completed, you'll have an ISO containing all the Red Hat Enterprise Linux OpenStack Platform RPMs in a directory named after the "repoid" above. The ISO in that directory will be named after the "rollname".
# ls rhel-6-server-openstack-4.0-rpms
Let's add the roll to the distribution:
# rocks add roll rhel-6-server-openstack-4.0-rpms/rhel-openstack-4.0-6.5.update1-0.x86_64.disk1.iso
And then enable it using the name listed in the first column of "# rocks list roll":
# rocks enable roll rhel-openstack-4.0
Now we'll do the same thing to get the most recent Red Hat Enterprise Linux 6 Server RPMs so we can take advantage of the full set of updates for that distribution. (Covers the OpenSSL Heartbleed bug and updates some RPMs required for the latest version of Red Hat Enterprise Linux OpenStack Platform.) 
This is the same as the preceding process, so we'll just show the commands and an ending screenshot.
# rocks create mirror repoid=rhel-6-server-rpms name=RHEL-6-Server-Updates-06122014
This will take more time than the OpenStack repository. If you didn't write your mother then, do it now. More coffee is always an option. So is a second (or third!) donut. Lunch might be in order, a longish one. 
(The "name" parameter will enable us to keep track of repository mirrors by date. This allows us to add updates on a roll basis without overwriting previous distributions. Testing new distributions becomes easy this way by assigning rolls to new distributions and machines to the distribution, providing delineation between production and test environments. If this doesn't make sense, don't worry, it's getting into cluster life-cycle management, and you'll understand it when you have to deal with it.)
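The dated name in the example can be generated rather than typed by hand. A small sketch (the prefix is just this document's naming convention):

```shell
# Build a mirror name stamped with today's date (MMDDYYYY),
# matching the RHEL-6-Server-Updates-06122014 style used above.
name="RHEL-6-Server-Updates-$(date +%m%d%Y)"
echo "$name"
```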
Once the repository has been mirrored, a roll has been created. Let's add it:
# rocks add roll rhel-6-server-rpms/rhel-6-server-rpms-6.5.update1-0.x86_64.disk1.iso
# rocks list roll
to check the name.
Enable it:
# rocks enable roll rhel-6-server-rpms
Disable the original RHEL roll, since the Updates roll contains everything that was in the original plus all of the updates.
# rocks disable roll RHEL
Check it:
# rocks list roll

Now recreate the distribution the backend nodes will install from. This creates one repository for kickstart to pull from during backend node installation or during yum updates of individual packages. 
# rocks create distro
Breathe. Sign your mother's letter and address the envelope. The distro create should be done by then, because you probably have to find her address after all these years of not writing her.
Update the StackIQ Cluster Manager
Now we're going to update the StackIQ Cluster Manager before installing any backend nodes. It's good hygiene and gets us running the latest and greatest Red Hat Enterprise Linux 6.5.
# yum -y update
Then reboot when it's done. Once the machine comes back up, you can install the Foreman server and then the compute nodes. The steps to do that come next.
Step 4. Install the Foreman Appliance

StackIQ Cluster Manager contains the notion of an “appliance.” An appliance has a kickstart structure that installs a preconfigured set of RPMS and services that allow for concentrated installation of a particular application. The bridge roll provides a “Foreman” appliance that sets up the automatic installation of the Red Hat Foreman server with the required OpenStack infrastructure. It’s the fastest way to get a Foreman server up and running.

Installing Backend Nodes Using Discovery Mode in the StackIQ Cluster Manager GUI

“Discovery” mode allows the automatic installation of backend nodes without pre-populating the StackIQ Cluster Manager database with node names, IP addresses, MAC addresses, etc. The StackIQ Cluster Manager runs DHCP to answer and install any node making a PXE request on the subnet. This is ideal when you a) have full control of the network and its policies, and b) don’t care about the naming convention of your nodes. If either is not true, please follow the instructions for populating the database in “Install Your Compute Nodes Using CSV Files” in the cluster-core roll documentation referenced above (Section 3.4.2).

“Discovery” mode is no longer turned on by default, as it may conflict with a company’s networking policy. To turn on Discovery mode, in a terminal or ssh session on StackIQ Cluster Manager do the following:

# rocks set attr discover_start true

To turn it off after installation if you wish:

# rocks set attr discover_start false

DHCP is always running but with “discover_start” set to “false,” it will not promiscuously answer PXE requests. (In the next release this will simply be a button to turn on and off "discovery.")
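Since a typo here means DHCP either ignores or promiscuously answers PXE requests, you might wrap the toggle in a small guard. This is a hypothetical sketch, not part of the product; it echoes the rocks command it would run rather than executing it, since `rocks` only exists on the cluster manager:

```shell
# set_discovery: only let "true" or "false" through to the
# discover_start attribute; anything else is rejected.
set_discovery() {
    case "$1" in
        true|false) echo "rocks set attr discover_start $1" ;;
        *) echo "usage: set_discovery true|false" >&2; return 1 ;;
    esac
}

set_discovery true
```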

With Discovery turned on, you can perform installation of backend nodes via the GUI or via the command line. To install via the GUI, go to the StackIQ Cluster Manager GUI at http://<StackIQ Cluster Manager hostname or IP>

 

Click the Login link and log in as “root” with the password set for “root” during installation.
Go to the “Discover” tab:

Click on Appliance, and choose “Foreman” and click “Start.”

 

Boot the server you are using as the Foreman server. All backend nodes should be set to PXE first on the network interface attached to the private network. This is a hard requirement.

In the GUI, you should see a server called “foreman-0-0” appear in the dialog, and, after some time, the Visualization area in the "Discover" tab will show the network traffic generated during installation.

The Foreman server appliance installation is somewhat chatty. You’ll receive status updates in the Messages box at the bottom of the page for what is happening on the node. The bare metal installation of the Foreman server is relatively short, about 20 minutes depending on the size of the disks being formatted. The installation of the Foreman application takes longer and happens after the initial boot due to RPM packaging constraints of the Foreman installer. It should be done, beginning to end, in about an hour.

When the machine is up, the indicator next to its name will be green, and there will be a message in the alerts box indicating the machine has installed Foreman.

Using the command line:

If for some reason you do not have access to the front-end web GUI, or access is extremely slow, or you just happen to be a command line person, there is a command to discover backend resources.

To install a Foreman appliance:

# insert-ethers

Choose “Foreman” and choose “OK”

 

Boot the machine and it should be discovered, assuming it is set to PXE boot first.

Once the Foreman server is installed, you can access its web interface by running Firefox on the StackIQ Cluster Manager. It should be available at the IP address listed by:

# rocks list host interface foreman-0-0
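To grab just the IP from that listing, you can filter the output. The column layout below is illustrative and may differ slightly between releases; on a real frontend you would pipe the command itself instead of the here-string:

```shell
# Illustrative `rocks list host interface foreman-0-0` output.
listing='SUBNET  IFACE MAC               IP           NETMASK
private eth0  aa:bb:cc:dd:ee:ff 10.1.255.253 255.255.0.0'

# Print the IP column for the row on the private subnet.
echo "$listing" | awk '$1 == "private" {print $4}'
```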

Adding an additional interface

If you want it accessible on the public or corporate network and not just on the private network, it will be necessary to add another network interface attached to the public network.

If the interface was detected during install:

# rocks set host interface ip foreman-0-0 <interface> <public IP>

# rocks set host interface subnet foreman-0-0 <interface> public

If you add the interface after the fact:

# rocks add host interface help

And fill in the appropriate fields.

In either event, to make the network change live, sync the network:

# rocks sync host network foreman-0-0 restart=yes

This procedure is more clearly delineated in section 4.3 of the cluster-core roll documentation, referenced (twice!) above.

Step 5. Install the Backend Nodes

Before we install the backend nodes (also known as “compute nodes”), we want to ensure that all disks in the backend nodes are configured and controlled by the StackIQ Cluster Manager. On node reinstall, this prevents the inadvertent loss of data on disks that are not the system disk. Now, we don’t want to reconfigure the controller and reformat disks on every installation, so we need to instruct the StackIQ Cluster Manager to perform this task the next time the backend nodes install. We do this by setting an attribute (“nukedisks”):

# rocks set appliance attr compute nukedisks true

After node reinstallation, this attribute is automatically set to “false,” so the only way to reformat non-system disks is to deliberately set this attribute to “true” before a node reinstall.

Now we are ready to install the backend nodes. This is the same procedure that we used to install the Foreman server. This time, however, choose “Compute” as the appliance, whether you are using the web GUI or the CLI command “insert-ethers”.

Make sure the StackIQ Cluster Manager is in "discovery" mode using the CLI or GUI and all backend nodes are PXE booted. StackIQ Cluster Manager discovers and installs each backend node in parallel, packages are installed in parallel, and disks on the node are also formatted in parallel. All this parallelism allows us to install an entire cluster, no matter the size, in about 10 to 20 minutes -- no manual steps are required. For more information on installing and using the StackIQ Cluster Manager, please visit http://www.stackiq.com/support/ or http://www.youtube.com/stackiq. Please review the above video and section 3.4 of the cluster-core roll documentation for questions.

After all the nodes in the cluster are up and running, you will be ready to deploy OpenStack via the Foreman web interface. In this example, the StackIQ Cluster Manager node was named “kaiza” and the Foreman server was named “foreman-0-0.” The compute nodes were assigned default names of compute-0-0, compute-0-1, and compute-0-2.

This is how it looks on the GUI when all the installs are completed.

Step 6. Configuring Foreman to Deploy OpenStack

The Foreman server supplied by Red Hat contains all the puppet manifests required to deploy machines with OpenStack roles. With the backend nodes installed and properly reporting to Foreman, we can go to the Foreman web GUI and configure the backend nodes to run OpenStack.

The example here will be for the simplest case: a Nova Network Controller using a single network, and a couple of hypervisors running the Cirros cloud image.

More complex cases (Neutron, Swift, Cinder) will follow in the next few weeks as appendices to this document. Feel free to experiment ahead of those instructions, however.

1. Go to https://

If the security certificate is not trusted, choose “Proceed Anyway” or, in Firefox, accept the certificate.

You should get a login screen:


2. Log in. The default username is “admin” and the default password is “changeme.” Take the time to change the password once you log in, especially if the Foreman server is accessible to the outside world.

3. Add a controller node

You should see all the nodes you’ve installed listed on the Foreman Dashboard. Click on the “Hosts” tab to go to the hosts window.

Click on the machine you intend to use as a controller node. You will have to change some parameters to reflect the network you are using for OpenStack (in this example, the private one).

 

It’s highly recommended that this machine also have a connection to the external network (web or corporate) to simplify web access. See “Adding an additional interface” above for how to do that. Do not choose the Foreman server as a controller node: the OpenStack Dashboard overwrites httpd configuration files and will disable the ability to log into the Foreman web server. However, if you have a small cluster, you can add the Foreman server as an OpenStack Compute node, as we do in this example. You may not want to do that in a larger cluster, though; separation of services is almost always a good thing.

Click on the host; we will use “compute-0-0.” When the “compute-0-0” page comes up, click on “Edit.”
 
You should see a page called “Edit compute-0-0.local.” Set the “Host Group” tab to “Controller (Nova Network).” (An example of Neutron networking will follow in later Appendices to this document.)

Click on the “Parameters” tab. There are a lot of parameters here, but we will change the minimum to reflect our network.

Click the “Override” button next to the following parameters:

controller_priv_host

controller_pub_host

mysql_host

qpid_host

These parameters will be listed at the bottom of the page with text fields to change them. The controller_priv_host, mysql_host, and qpid_host should all be changed to the private interface IP of the controller node, i.e., the machine you are editing right now.

The controller_pub_host should be the IP address of the public interface (if you have added one) of the controller node, i.e. the machine you are editing right now.

If you don’t know the IP address of the controller node, run “# rocks list host interface compute-0-0” in a terminal on the StackIQ Cluster Manager.
In this instance, the controller’s public IP address is on eth2 (we set it that way and cabled it to the corporate external network) and is 192.168.1.60.

Once you’ve made the changes, click “Submit.”

 
Going back to the “Hosts” tab, you should see that “compute-0-0.local” has the “Controller (Nova Network)” role.
There is a puppet agent that runs on each machine every 30 minutes. It will automatically update the machine’s configuration and make it the OpenStack Controller. If you don’t want to wait that long, start the puppet process yourself from the StackIQ Cluster Manager. (Alternatively, you can ssh to compute-0-0 and manually run “puppet agent -tv”.)

Once the puppet run finishes, you can add OpenStack Computes. (The puppet run on the controller node can take a while to execute.)

Add OpenStack Compute Nodes

There isn’t much for an OpenStack Controller to do if it can’t launch instances of images, so we need a couple of hypervisors. We’ll do this a little differently than for the Controller node, where we edited one individual machine; instead, we’ll edit the “Host Group” we want the computes to run as. This allows us to make the changes once and apply them to all the machines.

Go to “More” and choose “Configuration” from the drop down, then click on “Host Groups” in the next drop down.

Click on “Compute (Nova Network)” and it will bring you to an “Edit Compute (Nova Network)” screen:

Choose the “Parameters” tab:
 

We’re going to edit a number of fields, similar to the Controller node. Click the “Override” button on each of the following parameters and edit them at the bottom of the page:

controller_priv_host - set to private IP address of controller

controller_pub_host - set to public IP address of controller

mysql_host - set to private IP address of controller

qpid_host - set to private IP address of controller

nova_network_private_iface - the device of the private network interface

nova_network_public_iface - the device of the public network interface

The nova_network_*_iface parameters default to em1 and em2. These may work on the machines in your cluster, in which case you may not have to change them. Since the test cluster is on older hardware, its networks sit on eth0, eth1, and eth2, so those are the appropriate values here. The test cluster needs the eth2 interface for the public network because it is using foreman-0-0 as a compute node. If your Foreman node is not part of your test cluster, you may not need to change this.

More advanced networking configurations, i.e. when using multiple networks or using Neutron, may require additional parameters.

Click “Submit.” Any host listed with the “Compute (Nova Network)” role will inherit these parameters.

Now let’s add the hosts that will belong to the “Compute (Nova Network)” Host Group.

Go to the “Hosts” tab once again, and choose all the hosts that will run as Nova Network Computes. In this example, since it’s such a small cluster, we’ll add the “foreman-0-0” machine as an OpenStack Compute:

Now click on “Select Action” and choose “Change Group.”

 

Click on “Select Host Group” and choose “Compute (Nova Network)” then click “Submit.”

 

The hosts should show the group they’ve been assigned to:

 

Again, you can wait for the Puppet run or spawn it yourself from StackIQ Cluster Manager. Since we have a group of machines, we will use “rocks run host” to spawn “puppet agent -tv” on all the machines:

 
If we had chosen only the Compute nodes for OpenStack Compute role and not the Foreman node, we could do this on just the computes by specifying their appliance type:

Once puppet has finished, log into the OpenStack Controller Dashboard to start using OpenStack.

Using OpenStack

To access the controller node, go to http://<controller node ip>. This is accessible on either the public IP you configured for this machine or the private IP. If you have only configured the private IP, you’ll have to open a browser from the StackIQ Cluster Manager or SSH port forward to the private IP from your desktop.

The username is “admin” and the password was randomly generated during the Controller puppet install. To get this password, go to the Foreman web GUI, click on the “Hosts” tab and click on the host name of the Controller host: 

 

Then click “Edit” and go to the “Parameters” tab:

 

Copy the “admin_password” string:

 

 

Paste it into the password field on the OpenStack Dashboard and click “Submit.”

 

 

You should now be logged into the OpenStack Dashboard.

 

 

Click on “Hypervisors,” and you should see the three OpenStack compute nodes you’ve deployed.

 

As a simple example, we’ll deploy the Cirros cloud image that OpenStack uses in their documentation.

Click on “Images.”

 

Click on “Create Image” and you’ll be presented with the image configuration window.

 

Fill in the required information:

Name - we’ll just use “cirros”

Image Source - use default “Image Location”

Image Location - http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img

 

Why do we know this? Because I looked it up here: http://docs.OpenStack.org/image-guide/content/ch_obtaining_images.html

Format - QCOW2

And make it “Public”

Then click “Create Image.”

The image will show a status of “Queued.”

 

And when it’s downloaded and available to create Instances, it will be labeled as “Active.”

 

Cool! Now we can actually launch an instance and access it.

Adding an Instance:

Click on “Project” then on “Instances” in the sidebar:

 

Click on “Launch Instance.”

 

Fill out the parameters:

Availability Zone - nova, default

Instance Name - we’ll call it cirros-1

Flavor - m1.tiny, default

Instance Count - 1, default

Instance Boot Source - Select “Boot from Image”

Image Name - Select “cirros”

 

It should look like this:

 

Now click "Launch."
You should see a transient “Success” notification on the OpenStack Dashboard and then the instance should start spawning.
When the instance is ready for use, it will show as “Active” with power state “Running,” and log-in should work. (The Cirros login is “cirros” and the password is “cubswin:)”; the emoticon is part of the password.)

Logging into the Instance

In this simple example, to log into the instance, you must log into the hypervisor where the instance is running. Subsequent blog posts will deal with more transparent access for users.

 

To find out which hypervisor your instance is running on, go to the “Admin” panel from the left sidebar and click on “Instances.”


 

We can see the instance is running on compute-0-1 with IP 10.0.0.2. So, from a terminal on the frontend, ssh into the hypervisor compute-0-1.


 

Now log into the instance as user “cirros” with password “cubswin:)” (the emoticon is part of the password).

 

 


 

Now you can run Linux commands to prove to yourself you have a functioning instance:


 

Reinstalling

There are times when a machine needs to be reinstalled: hardware changes or repair, uncertainty about a machine’s state, etc. A reinstall generally takes care of these issues. The goal of StackIQ Cluster Manager is software homogeneity across heterogeneous hardware, giving you immediate consistency of your software stacks on first boot. One of the ways we do this is by making reinstallation of your hardware as fast as possible (reinstalling 1,000 nodes is about as fast as reinstalling 10) and correct when a machine comes back up.

One of the difficulties with the OpenStack puppet deployment is certificate management. When a machine is first installed and communicates with Foreman, a persistent puppet certificate is created. When a machine is re-installed or replaced, the key needs to be removed in order for the machine to resume its membership in the cluster. StackIQ Cluster Manager takes care of this by watching for reinstallation events and communicating with the Foreman server to remove the puppet certificate. When the machine finishes installing, the node will rejoin the cluster automatically. In the instance of a reinstall, if the OpenStack role has been set for this machine, the node will do the appropriate puppet run and rejoin OpenStack in the assigned role, and you really don’t have to do anything special for that to happen.

To reinstall a machine:

# rocks run host “/boot/kickstart/cluster-kickstart-pxe”

or

# rocks set host boot action=install

# rocks run host “reboot”

If you wish to start with a completely clean machine and don’t care about the data on it, set the “nukedisks” flag to true before doing one of the above installation commands:

# rocks set host attr <hostname> nukedisks true

Multi-use clusters
StackIQ Cluster Manager has been used to run multi-use clusters with different software stacks assigned to different sets of machines. This OpenStack implementation is like that. If you want to allocate machines for another application while using the RHEL OpenStack Bridge roll, you can turn off OpenStack deployment on certain machines, and they will not be set up to participate in the OpenStack environment. To do this, simply do the following:

# rocks set host attr <hostname> has_openstack false

The bridge roll sets every compute node to participate in the OpenStack deployment. Setting this flag to “false” for a host means the machine will not participate. If the machine was first installed with OpenStack, you will have to reinstall it after setting this attribute.

Updates

Red Hat provides updates to Red Hat Enterprise Linux OpenStack Platform and to Red Hat Enterprise Linux Server regularly. StackIQ tracks these updates and will provide updated rolls for critical patches or service updates to Red Hat Enterprise Linux OpenStack Platform. Additionally, if your frontend is properly subscribed to RHN or Subscription Manager, these updates can easily be pulled with the "rocks create mirror" command. Updating and subscription management deserve a blog post of their own, which will be forthcoming.

Next Steps

Admittedly, we are documenting the simplest use case - Nova Networking on a single network. This is not ideal for production systems, but by now, you should be able to see how you can use the different components, StackIQ Cluster Manager, Foreman, and OpenStack Dashboard to easily configure and deploy OpenStack. Adding complexity can be done as you explore the Red Hat Enterprise Linux OpenStack Platform ecosystem to fit your company’s needs.

In the future, we will provide further documentation on deploying Neutron, Swift, and Cinder. Additionally, layering OpenStack roles (Swift and Compute, for instance) will be topics we will be exploring and blogging about as we move forward with Red Hat Enterprise Linux OpenStack Platform. Stay tuned!

Resources

StackIQ:
Using StackIQ Cluster Manager for deploying clusters: https://s3.amazonaws.com/stackiq-release/stack3/roll-cluster-core-usersguide.pdf. Video: https://www.youtube.com/watch?v=gVPZcA-yHQY&list=UUgg-AnfqnNCp-DxpVEfJkuA

Red Hat:
Red Hat Enterprise Linux OpenStack Platform Documentation: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/

We’re happy to answer questions on installing, configuring, and deploying Red Hat Enterprise Linux OpenStack Platform with StackIQ Cluster Manager. Please send email to support@stackiq.com.
Greg Bruno, Ph.D., VP of Engineering, StackIQ
@StackIQ

Update: StackIQ Cluster Manager Now Integrated With Cloudera

Posted by Greg Bruno on Apr 8, 2014 4:00:00 PM

Updated: 4/8/2014 (Note that these instructions are for Cloudera Enterprise 4. To use StackIQ Cluster Manager with Cloudera Enterprise 5, please contact support@stackiq.com)

StackIQ takes a “software defined infrastructure” approach to provision and manage cluster infrastructure that sits below Big Data applications like Hadoop. In this post, we’ll discuss how this is done, followed by a step-by-step guide to installing Cloudera Manager on top of StackIQ’s management system.

Components:

The hardware used for this deployment was a small cluster: 1 node (i.e. 1 server) is used for the StackIQ Cluster Manager and 4 nodes are used as backend/data nodes. Each node has 2 disks and all nodes are connected together via 1Gb Ethernet on a private network. The StackIQ Cluster Manager node is also connected to a public network using its second NIC. StackIQ Cluster Manager has been used in similar deployments between 2 nodes and 4,000+ nodes.


Step 1: Install StackIQ Cluster Manager

The StackIQ Cluster Manager node is installed from bare metal (i.e. there is no prerequisite software and no operating system previously installed) by burning the StackIQ Cluster Core Roll ISO to DVD and booting from it (the StackIQ Cluster Core Roll can be downloaded from the Rolls section after registering). The Core Roll leads the user through a few simple forms (e.g., what is the IP address of the Cluster Manager, what is the gateway, DNS server) and then asks for a base OS DVD (for example, Red Hat Enterprise Linux 6.5; other Red Hat-like distributions such as CentOS are supported as well). The installer copies all the bits from both DVDs and automatically creates a new Red Hat distribution by blending the packages from both DVDs together.

The remainder of the Cluster Manager installation requires no further manual steps and this entire step takes between 30 to 40 minutes.

 

Step 2: Install the CDH Bridge Roll

StackIQ has developed software that “bridges” our core infrastructure management solution to Cloudera’s Hadoop distribution that we’ve named the CDH Bridge Roll. One feature of our management solution is that it records several parameters about each backend node (e.g., number of CPUs, networking configuration, disk partitions) in a local database. After StackIQ Cluster Manager is installed and booted, it is time to download and install the CDH Bridge Roll:

  • Log into the frontend as "root", download cdh-bridge ISO from here.

  • Then execute the following commands at the root prompt:

 # rocks add roll <path_to_iso>
 # rocks enable roll cdh-bridge
 # rocks create distro
 # rocks run roll cdh-bridge | sh

The cluster is now configured to install Cloudera packages on all nodes.

 

Step 3: Install Cloudera Manager and Cloudera CDH4 Roll

You can download a prepackaged Cloudera Manager here and a prepackaged Cloudera CDH4 from here.

We will now install these 2 ISOs. 

 rocks add roll cloudera-cdh4/cloudera-cdh4-6.5-0.x86_64.disk1.iso
 rocks add roll cloudera-manager/cloudera-manager-6.5-0.x86_64.disk1.iso
 rocks enable roll cloudera-cdh4
 rocks enable roll cloudera-manager
 rocks create distro
 rocks run roll cloudera-cdh4 | sh
 rocks run roll cloudera-manager | sh

 

Step 4: Install the backend nodes

Before we install the backend nodes (also known as compute nodes), we want to ensure that all disks in the backend nodes are optimally configured for HDFS. During an installation of a data node, our software interacts with the disk controller to optimally configure it based on the node’s intended role. For data nodes, the disk controller will be configured in “JBOD mode” with each disk configured as a RAID 0, a single partition will be placed on each data disk and a single file system will be created on that partition. For example, if a data node has one boot disk and 4 data disks, after the node installs and boots, you’ll see the following 4 file systems on the data disks: /hadoop01, /hadoop02, /hadoop03 and /hadoop04.
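As a sketch of the naming convention, the mount points for a data node follow a zero-padded /hadoopNN pattern:

```shell
# Print the file systems you would expect on a data node with
# four data disks, per the convention described above.
for i in 1 2 3 4; do
    printf '/hadoop%02d\n' "$i"
done
```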

For more information on this feature, see our blog post Why Automation is the Secret Ingredient for Big Data Clusters.

Now we don’t want to reconfigure the controller and reformat disks on every installation, so we need to instruct the StackIQ Cluster Manager to perform this task the next time the backend nodes install. We do this by setting an attribute (“nukedisks”) with the rocks command line:

# rocks set appliance attr compute nukedisks true
# rocks set appliance attr cdh-manager nukedisks true

Now we are ready to install the backend nodes. First, we put the StackIQ Cluster Manager into "discovery" mode using the CLI or GUI and PXE boot all backend nodes. We will boot the first node as a cdh-manager appliance. The cdh-manager node will run the Cloudera Manager web admin console used to configure, monitor, and manage CDH.



We will install all the other nodes in the cluster as compute nodes. StackIQ Cluster Manager discovers and installs each backend node in parallel (10 to 20 minutes) - no manual steps are required.


For more information on installing and using the StackIQ Cluster Manager (a.k.a., Rocks+), please visit StackIQ Support or watch the demo video.

After all the nodes in the cluster are up and running you will be ready to install Cloudera Manager. In this example, the StackIQ Cluster Manager node was named “frontend” and the compute nodes were assigned default names of compute-0-0, compute-0-1, compute-0-2 (3 nodes in Rack 0), and compute-1-0 (1 node in Rack 1).
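You can confirm the node names from the frontend at any time with the Rocks command line; the compute-<rack>-<rank> names are assigned in discovery order:

```
# Show every node the frontend knows about, with rack/rank placement
# rocks list host
```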

 

Step 5: Install Cloudera Manager 

SSH into the cdh-manager appliance as root and execute:

# /opt/rocks/sbin/cloudera-manager-installer.bin --skip_repo_package=1

This will install Cloudera Manager with packages from our local yum repository as opposed to fetching packages over the internet.

Step 6: Select What to Install 

Log into the cdh-manager node at http://<cdh-manager>:7180 (where ‘<cdh-manager>’ is the FQDN of your cdh-manager node) with username admin and password admin.

[Screenshot: Cloudera Manager login screen]

Choose Cloudera Enterprise trial if you want to do a trial run.

[Screenshot: Cloudera Manager edition selection screen]

The GUI will now prompt you to restart the Cloudera Manager server. Run the following command on the cdh-manager node.

# service cloudera-scm-server restart

After restarting the server, you will be asked to log in again. Click Continue in the screen below.

[Screenshot: Continue screen after Cloudera Manager restart]

Specify the list of hosts for the CDH installation, e.g., compute-0-[0-3],cdh-manager-0-0.

[Screenshot: host specification screen]

After all the hosts are identified, hit Continue.

[Screenshot: identified hosts screen]

Choose Use Packages and select CDH4 as the version in the screen below.

[Screenshot: CDH version selection screen]

Choose Custom Repository for the CDH release you want to install. Specify http://<frontend>/install/distributions/rocks-dist/x86_64/ as the URL of the repository, where <frontend> is the IP address of the cluster’s frontend.

[Screenshot: custom repository URL screen]

In the example above, 10.1.1.1 was the IP address of the private eth0 interface on the frontend.
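Before pointing Cloudera Manager at that URL, a quick sanity check from any node confirms the repository is actually being served. A sketch (substitute your own frontend’s IP address):

```
# curl -s http://10.1.1.1/install/distributions/rocks-dist/x86_64/ | head
```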

Choose All hosts accept same private key as the authentication method. Use Browse to upload the private key present in /root/.ssh/id_rsa on the StackIQ Cluster Manager.

[Screenshot: SSH authentication screen]

You will then see a screen where the progress of the installation will be indicated. After installation completes successfully, hit Continue.

[Screenshot: installation progress screen]

You will then be directed to the following screen where all hosts will be inspected for correctness.

[Screenshot: host inspection screen]

Choose the combination of services you want to install and hit Continue.

[Screenshot: service selection screen]

Review that all services were successfully installed.

[Screenshot: service installation review screen]

Finally, your Hadoop services will be started.

[Screenshot: Hadoop services starting]

Step 7: Run a Hadoop sample program

It is never enough to set up a cluster and the applications users need and then just let them have at it. There are generally nasty surprises for both parties when that happens. A validation check is required to make sure everything is working as expected.

Do the following to test whether the cluster is functional:

  • Log into the frontend as “root” via SSH or PuTTY.

  • On the command line, run the following MapReduce program as the “hdfs” user, which estimates the value of pi by sampling:

# sudo -u hdfs hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000

Output should look something like this.

[Screenshot: pi example output]
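If you want a quick intuition for what the pi estimator is doing, the same dart-throwing idea can be sketched locally with awk, no Hadoop required (this uses plain pseudo-random sampling, whereas the Hadoop example uses a quasi-Monte Carlo variant):

```shell
# Estimate pi by sampling random points in the unit square and counting
# how many land inside the quarter circle: pi ~= 4 * inside / total
awk 'BEGIN {
  srand(42); n = 100000; inside = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x*x + y*y <= 1) inside++
  }
  printf "Estimated value of Pi is %f\n", 4 * inside / n
}'
```

With 100,000 samples the estimate typically lands within about 0.01 of 3.14159; the Hadoop job does the same thing, just spread across map tasks.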

Congratulations, you are done!

We’re certain you’ll find this the quickest way to deploy a cluster capable of running Cloudera Hadoop. Give it a shot and send us your questions!

The StackIQ Team

@StackIQ


Topics: hadoop cluster, hadoop, hadoop management, hadoop startup, big data, cloudera

Web-Scale-IT… What the What?

Posted by Matthias Ankli on Feb 17, 2014 9:38:00 AM

With a constantly increasing amount of data and more complex application requirements, talk about so-called Web-scale IT architecture is on the rise. But what exactly does Web-scale IT mean?

The research firm Gartner introduced the term Web-scale IT in an effort to describe what the fine folks at Internet giants like Facebook, Google, LinkedIn, etc. have achieved in agility and scalability by applying new processes, architectures, and practices. These companies exceed the “scale in terms of sheer size to also include scale as it pertains to speed and agility,” according to Gartner. The research firm also named it one of the top ten strategic technology trends of 2014.

The term Web-scale IT is often used in the context of DevOps, but it also applies to the underlying IT infrastructure – the system needs to be in a “known good state” to achieve agility at scale. And by now you probably know where we are going with this (hint: we pride ourselves on being leaders in infrastructure automation).

Can Organizations of All Sizes Benefit From Web-Scale IT Methodology?

Good question, so let’s think about it. While most organizations never reach the scale of a Google or a Facebook, they can still benefit from the increased velocity that comes with the Web-scale IT approach (if done right). But let’s go even further: thanks to the availability of powerful open source tools like Hadoop and OpenStack, Big Data and cloud techniques are no longer the privilege of hyperscale web properties and have become available to enterprises of all sizes. That said, with all these new tools and capabilities, many sub-web-scale enterprises today run some form of Big Infrastructure, and that brings new challenges.

Disruption of the IT Infrastructure As We Know It

The shift to Web-scale IT represents a radical departure from the old ways of doing things in the IT world, and as with every disruptive movement, it can be a scary transition. Web-scale IT requires IT professionals to move faster than ever to deploy and manage Big Infrastructure. Infrastructure has become increasingly heterogeneous, with commodity hardware, open source software, and home-grown provisioning and management software that make it difficult to manage at scale. Many steps are still done manually and are inefficient and error-prone.

To do Web-scale IT right, organizations must move to the next level of infrastructure automation, the level that understands the requirements of applications and responds to those requirements in real time – a software defined environment. In a software-defined environment, IT becomes simplified, as well as responsive to shifting requirements and adaptive through automation. Building and managing these systems is the “secret sauce” but it isn’t easy to achieve.

Hyperscale websites have the capital to build their own management tools that automate the management, configuration, and deployment, but that takes resources that enterprise IT infrastructure managers just don’t have. However, they still need automation if they want to achieve their goals. They need a solution that will harness the collection of commodity hardware and open source software, and make it work like a turnkey solution (from bare metal all the way to the applications layer) without the price tag of a proprietary system.

What Would Such a Solution Look Like?

Here at StackIQ we spend a lot of time thinking about how to make the lives of infrastructure managers easier. How can we help enterprises of all sizes to get to the next level of infrastructure automation and benefit from a Web-scale IT approach?

What do you think? To get the discussion started, here are a few characteristics that we believe are key to be successful in this new IT world:

- Heterogeneous support for all commodity hardware and open source software

- Capability to build the entire stack from bare metal to the applications layer

- Modular extensibility of the stack to keep up with ever-changing business requirements

- Simplified deployment with script-free configuration

Please chime in! We are looking forward to your input.

The StackIQ Team (@stackiq)

 

Curious to learn more about Web-scale IT? Check out the blog of Gartner’s Cameron Haight. We also highly recommend the research paper “The Long-Term Impact of Web-Scale IT Will Be Dramatic” (note that this is a paid Gartner publication).


Topics: hadoop, big data, Cloud, cluster management, business, automation, DevOps, architecture, bare metal, big infrastucture, Web-scale IT, 2014

Get Salted at SaltConf 2014

Posted by Matthias Ankli on Jan 22, 2014 12:58:00 PM

It’s the time of the year to go to beautiful Salt Lake City, UT. Snow-covered mountains in a winter wonderland that offers lots of activities on and off the slopes for all you snow daredevils. But that’s not all, my friends; it’s also time for the annual SaltStack conference. It’s THE global user conference for SaltStack customers, partners, developers, and community members. Here at StackIQ, we have always been great supporters of the SaltStack community and are excited to announce that Mason Katz, our CTO, will give a talk alongside many other wickedly smart people. Mason’s session will focus on how to augment bare metal clusters with SaltStack. This sounds great, you may say, but what are the use cases for SaltStack combined with StackIQ’s cluster management software? This question and more will be answered during Mason’s talk, but here is a little taste of what to expect.

StackIQ Cluster Manager, our comprehensive software suite, automates the deployment and management of Big Data, Cloud, Linux and HPC clusters. We take care of the entire software and hardware stack from bare metal all the way to the applications layer. Recently we have expanded on our kickstart installation method to include SaltStack for two use cases:  First, we use SaltStack as a replacement of user management systems such as NIS or LDAP. Second, we use SaltStack to dynamically manage middleware configuration files (e.g., Hadoop). Unique to our use of SaltStack is the integration of our cluster configuration database with SaltStack grains and state files. This talk is focused on our specific use of SaltStack and a walk through our design decisions. We’ll introduce a novel use case where SaltStack is a consumer of configuration information in addition to a manager. Last but not least, we want to hear from the community on how to make StackIQ Cluster Manager and SaltStack work even better together.
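To give a flavor of what that grains integration enables once minions are up, here is the kind of ad-hoc query SaltStack supports. This is a hypothetical sketch: os and num_cpus are standard Salt grains, while rack stands in for a custom grain populated from the cluster configuration database (not StackIQ’s actual schema):

```
# salt '*' grains.item os num_cpus rack
```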

Did we get your attention? There is no better way to get salted than at SaltConf with hands-on labs and training, talks by your peers, SaltStack engineers and developers, big keynotes, lots of hacking and networking, and the BEST (so we were told) snow on earth.

Do you have any questions for the StackIQ crew before or during the conference? Just DM @masonkatz or @stackiq. Be there and stay warm!

The StackIQ Team

 

SaltConf 2014
January 28-30 @ Marriott City Center in Salt Lake City, UT
Augmenting Bare Metal Cluster Builds with SaltStack – Mason Katz, CTO, StackIQ (@masonkatz) 

Photo/artwork credits:
- Greetings from Salt Lake City,
Allposters


Topics: cluster, bare metal, big infrastucture, saltstack, saltconf

Performance scaling on a big data cluster with Dell and StackIQ

Posted by Matthias Ankli on Aug 29, 2013 12:51:00 PM

Firsthand experience is an important part of the decision-making process for IT professionals who are exploring cloud computing and big data solutions. In response, a large financial institution worked in collaboration with Dell and StackIQ on a proof-of-concept that compared application performance on a big data cluster. Using our StackIQ Cluster Manager software, the team was able to rapidly configure the servers – leading to more, higher-quality tests than anticipated.

Read the full article “Evaluating Performance Scaling on a Big Data Cluster” co-authored by our very own Tim McIntire and Greg Bruno in Dell Power Solutions magazine, 2013 issue 3. Or if you happen to be at a Dell office, pick up a print copy in the lobby.

The StackIQ Team

 

 


Topics: big data, cluster, Cloud, cluster management, case study

On-Premise Manufacturing

Posted by Greg Bruno on Jun 19, 2013 1:29:00 PM

Recently, we were in a partner meeting and the topic of "brownfield deployment" came up. Just to make sure we have our terms defined, a brownfield deployment is where a product is deployed onto an existing cluster, that is, the existing cluster already has a base OS installed and configured across all its nodes (a "ping and a prompt"). We were discussing this topic because our product excels at "greenfield deployments", that is, our product is a bare-metal installer that deploys, configures, optimizes and manages the entire software stack -- the base OS, middleware services and the application(s). In this discussion, the partner said, "but that doesn't fit into our customers’ current enterprise processes" of installing a base OS first, then handing servers off to another team for application deployment and management.

Actually, it does, if you look in the right place.

Today, there are several computer systems that don't fit into the “current enterprise process” of provisioning servers as a stand-alone process: Oracle and Teradata, to name two. These systems are constructed off-site at a manufacturing facility -- the hardware is assembled, then the entire software stack is installed, configured and optimized. Then the fully-assembled system is shipped to the enterprise customer as an appliance that is then integrated into the current set of on-premise computer systems.

As we've all read, we are at the beginning of a revolution surrounding database systems -- proprietary systems are being replaced with commodity "do it yourself" systems. We saw this same revolution in the high-performance computing space in the early 2000s. Proprietary systems created by Cray and IBM were replaced by commodity x86 clusters. This was fueled by significant hardware cost savings, but now the end user was faced with the heavy burden of deploying, configuring, optimizing and managing the entire software stack -- the job that Cray and IBM performed for their turnkey systems. That's why two other parallel systems developers and I created the Rocks Cluster Distribution to "make clusters easy" by automating what was once a heavy burden (the Rocks Cluster Distribution is the core of StackIQ's enterprise software line).

Fast forward to today, StackIQ is making clusters easy in the enterprise. We are seeing Oracle and Teradata systems being replaced by commodity clusters. And we are seeing enterprises struggle with the heavy burden of managing cluster-aware middleware and applications, just like we saw the high-performance computing community struggle 10 years ago.

The enterprise values robust software systems, which is why Oracle and Teradata were so successful in the past. Oracle's and Teradata's manufacturing process ensured robustness and stability for every system they produced. This is still of critical importance for enterprise applications, thus StackIQ has taken cues from Oracle and Teradata (I worked at Teradata prior to co-founding the Rocks project) and built an enterprise software management product that ensures the same robustness and stability for the entire software stack that enterprise users have come to expect; we do this via on-premise manufacturing. Our software transforms on-site commodity clusters into enterprise appliances.

Once commodity clusters are managed as appliances, these systems fit into every “current enterprise process”; your racks of Oracle or Teradata systems are proof.

Greg Bruno, PhD

@itsdrbruno


