StackIQ takes a “software defined infrastructure” approach to provision and manage cluster infrastructure that sits below Big Data applications like Hadoop. In this post, we’ll discuss how this is done, followed by a step-by-step guide to installing Cloudera Manager on top of StackIQ’s management system.
The hardware used for this deployment was a small cluster: 1 node (i.e., 1 server) is used for the StackIQ Cluster Manager and 4 nodes are used as backend/data nodes. Each node has 2 disks, and all nodes are connected via 1Gb Ethernet on a private network. The StackIQ Cluster Manager node is also connected to a public network using its second NIC. StackIQ Cluster Manager has been used in similar deployments ranging from 2 nodes to 4,000+ nodes.
Step 1: Install StackIQ Cluster Manager
The StackIQ Cluster Manager node is installed from bare metal (i.e. there is no prerequisite software and no operating system previously installed) by burning the StackIQ Cluster Core Roll ISO to DVD and booting from it (the StackIQ Cluster Core Roll can be downloaded from the Rolls section after registering). The Core Roll leads the user through a few simple forms (e.g., what is the IP address of the Cluster Manager, what is the gateway, DNS server) and then asks for a base OS DVD (for example, Red Hat Enterprise Linux 6.5; other Red Hat-like distributions such as CentOS are supported as well). The installer copies all the bits from both DVDs and automatically creates a new Red Hat distribution by blending the packages from both DVDs together.
The remainder of the Cluster Manager installation requires no further manual steps, and the entire process takes between 30 and 40 minutes.
Step 2: Install the CDH Bridge Roll
StackIQ has developed software that “bridges” our core infrastructure management solution to Cloudera’s Hadoop distribution that we’ve named the CDH Bridge Roll. One feature of our management solution is that it records several parameters about each backend node (e.g., number of CPUs, networking configuration, disk partitions) in a local database. After StackIQ Cluster Manager is installed and booted, it is time to download and install the CDH Bridge Roll:
Log into the frontend as "root" and download the cdh-bridge ISO from here.
Then execute the following commands at the root prompt:
# rocks add roll <path_to_iso>
# rocks enable roll cdh-bridge
# rocks create distro
# rocks run roll cdh-bridge | sh
The cluster is now configured to install Cloudera packages on all nodes.
Step 3: Install Cloudera Manager and Cloudera CDH4 Roll
You can download a prepackaged Cloudera Manager here and a prepackaged Cloudera CDH4 from here.
We will now install these 2 ISOs.
# rocks add roll cloudera-cdh4/cloudera-cdh4-6.5-0.x86_64.disk1.iso
# rocks add roll cloudera-manager/cloudera-manager-6.5-0.x86_64.disk1.iso
# rocks enable roll cloudera-cdh4
# rocks enable roll cloudera-manager
# rocks create distro
# rocks run roll cloudera-cdh4 | sh
# rocks run roll cloudera-manager | sh
Step 4: Install the backend nodes
Before we install the backend nodes (also known as compute nodes), we want to ensure that all disks in the backend nodes are optimally configured for HDFS. During an installation of a data node, our software interacts with the disk controller to optimally configure it based on the node’s intended role. For data nodes, the disk controller will be configured in “JBOD mode” with each disk configured as a RAID 0, a single partition will be placed on each data disk and a single file system will be created on that partition. For example, if a data node has one boot disk and 4 data disks, after the node installs and boots, you’ll see the following 4 file systems on the data disks: /hadoop01, /hadoop02, /hadoop03 and /hadoop04.
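As a rough illustration of the naming scheme above, the sketch below prints the mount points you would expect for a given number of data disks. The zero-padded /hadoopNN names follow the example in the text; after a real node boots, verify the actual layout with `df -h`.

```shell
# Sketch only: print the mount points the installer is expected to
# create for N data disks (names follow the /hadoopNN example above).
data_disks=4
for i in $(seq 1 "$data_disks"); do
  printf '/hadoop%02d\n' "$i"
done
```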
For more information on this feature, see our blog post Why Automation is the Secret Ingredient for Big Data Clusters.
Now we don’t want to reconfigure the controller and reformat disks on every installation, so we need to instruct the StackIQ Cluster Manager to perform this task the next time the backend nodes install. We do this by setting an attribute (“nukedisks”) with the rocks command line:
# rocks set appliance attr compute nukedisks true
# rocks set appliance attr cdh-manager nukedisks true
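The effect of the flag can be pictured with a small sketch. This is an illustration of the semantics only: on the real system the value lives in the cluster database and is consulted by the installer, not by a shell script.

```shell
# Illustration only: how a "nukedisks" flag gates disk handling on the
# next install (the real flag is stored in the cluster database).
nukedisks=true
if [ "$nukedisks" = "true" ]; then
  action="reconfigure controller and reformat data disks"
else
  action="preserve existing partitions and file systems"
fi
echo "$action"
```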
Now we are ready to install the backend nodes. First, we put the StackIQ Cluster Manager into "discovery" mode using the CLI or GUI, and all backend nodes are PXE booted. We will boot the first node as a cdh-manager appliance. The cdh-manager node will run the Cloudera Manager web admin console used to configure, monitor, and manage CDH.
After installation, the node shows up in the Cluster Manager.
We will install all the other nodes in the cluster as compute nodes. StackIQ Cluster Manager discovers and installs each backend node in parallel (10 to 20 minutes); no manual steps are required.
For more information on installing and using the StackIQ Cluster Manager (a.k.a., Rocks+), please visit StackIQ Support or watch the demo video.
After all the nodes in the cluster are up and running you will be ready to install Cloudera Manager. In this example, the StackIQ Cluster Manager node was named “frontend” and the compute nodes were assigned default names of compute-0-0, compute-0-1, compute-0-2 (3 nodes in Rack 0), and compute-1-0 (1 node in Rack 1).
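The default names encode each node's rack and rank. A minimal sketch, assuming the compute-&lt;rack&gt;-&lt;rank&gt; convention shown above:

```shell
# Hypothetical helper: split a default node name like "compute-1-0"
# into its rack and rank components.
name="compute-1-0"
rack=$(echo "$name" | cut -d- -f2)
rank=$(echo "$name" | cut -d- -f3)
echo "rack=$rack rank=$rank"
```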
Step 5: Install Cloudera Manager
To install Cloudera Manager on the frontend, as root, execute:
# /opt/rocks/sbin/cloudera-manager-installer.bin --skip_repo_package=1
This will install Cloudera Manager with packages from our local yum repository as opposed to fetching packages over the internet.
Step 6: Select What to Install
Log into the cdh-manager node at http://<cdh-manager>:7180 (where ‘<cdh-manager>’ is the FQDN of your cdh-manager node) with username admin and password admin.
Choose Cloudera Enterprise trial if you want to do a trial run.
The GUI will now prompt you to restart the Cloudera Manager server. Run the following command on the cdh-manager node:
# service cloudera-scm-server restart
After restarting the server, you will be asked to log in again. Click Continue.
Specify the list of hosts for CDH installation, e.g., compute-0-[0-3],cdh-manager-0-0.
After all the hosts are identified, hit Continue.
Choose Use Packages and select CDH4 as the version.
Choose a custom repository for the CDH release you want to install, and enter http://<frontend>/install/distributions/rocks-dist/x86_64/ as the repository URL, where <frontend> is the IP address of the cluster’s frontend.
In our example, 10.1.1.1 was the IP address of the private eth0 interface on the frontend.
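Putting the two together, the repository URL for this example can be assembled as below. This is just a sketch; substitute your own frontend's private IP address.

```shell
# Build the custom repository URL from the frontend's private address
# (10.1.1.1 is the example IP used in this walkthrough).
frontend="10.1.1.1"
repo_url="http://${frontend}/install/distributions/rocks-dist/x86_64/"
echo "$repo_url"
```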
Choose All hosts accept same private key as the authentication method. Use Browse to upload the private key present in /root/.ssh/id_rsa on StackIQ Cluster Manager.
You will then see a screen where the progress of the installation will be indicated. After installation completes successfully, hit Continue.
You will then be directed to the following screen where all hosts will be inspected for correctness.
Choose the combination of services you want to install and hit Continue.
Review that all services were successfully installed.
Finally your Hadoop services will be started.
Step 7: Run a Hadoop sample program
It is never enough to set up a cluster and the applications users need and then simply let them have at it; that is usually when nasty surprises appear for both parties. A validation check is required to make sure everything is working as expected.
To test that the cluster is functional:
Log into the frontend as “root” via SSH or PuTTY.
On the command line, run the following map-reduce program as the “hdfs” user, which runs a simulation to estimate the value of pi based on sampling:
# sudo -u hdfs hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000
The output should end with a line reporting the estimated value of Pi.
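If you want to script the validation, a rough sanity check can parse the job's final output line and confirm the estimate is close to pi. The sample line below is hard-coded for illustration; in practice you would capture the output of the hadoop command above.

```shell
# Sketch: verify the job's pi estimate is within tolerance of pi.
# The sample line is hard-coded; capture real output from the job above.
line="Estimated value of Pi is 3.14157500000000000000"
echo "$line" | awk '{
  est = $6                       # sixth field is the numeric estimate
  d = est - 3.141592653589793
  if (d < 0) d = -d
  if (d < 0.01) print "PASS"; else print "FAIL"
}'
```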
Congratulations, you are done!
We’re certain you’ll find this the quickest way to deploy a cluster capable of running Cloudera Hadoop. Give it a shot and send us your questions!
The StackIQ Team
With a constantly increasing amount of data and more complex application requirements, talk about so-called Web-scale IT architecture is on the rise. But what exactly does Web-scale IT mean?
The research firm Gartner introduced the term Web-scale IT in an effort to describe what the fine folks at Internet giants like Facebook, Google, LinkedIn, etc. have achieved in agility and scalability by applying new processes, architectures, and practices. These companies exceed the “scale in terms of sheer size to also include scale as it pertains to speed and agility,” according to Gartner. The research firm also named it one of the top ten strategic technology trends of 2014.
The term Web-scale IT is often used in the context of DevOps, but it also applies to the underlying IT infrastructure – the system needs to be in a “known good state” to achieve agility at scale. And by now you probably know where we are going with this (hint: we pride ourselves on being leaders in infrastructure automation).
Can Organizations of All Sizes Benefit From Web-Scale IT Methodology?
Good question, and let’s think about this. While most organizations don’t reach the scale of a Google or a Facebook, they will still benefit from the increased velocity that comes with the Web-scale IT approach (if done right). But let’s go even further: thanks to the availability of powerful open source tools like Hadoop and OpenStack, Big Data and cloud techniques are no longer the privilege of hyperscale web properties and are now available to enterprises of all sizes. With all these new tools and capabilities, many sub-web-scale enterprises today run some form of Big Infrastructure, and that brings new challenges.
Disruption of the IT Infrastructure As We Know It
The shift to Web-scale IT represents a radical departure from the old ways of doing things in the IT world, and as with every disruptive movement, it can be a scary transition. Web-scale IT requires IT professionals to move faster than ever to deploy and manage Big Infrastructure. Infrastructure has become increasingly heterogeneous, with commodity hardware, open source software, and home-grown provisioning and management software that make it difficult to manage at scale. Many steps are still done manually, and they are inefficient and error prone.
To do Web-scale IT right, organizations must move to the next level of infrastructure automation, the level that understands the requirements of applications and responds to those requirements in real time – a software defined environment. In a software-defined environment, IT becomes simplified, as well as responsive to shifting requirements and adaptive through automation. Building and managing these systems is the “secret sauce” but it isn’t easy to achieve.
Hyperscale websites have the capital to build their own management tools that automate the management, configuration, and deployment, but that takes resources that enterprise IT infrastructure managers just don’t have. However, they still need automation if they want to achieve their goals. They need a solution that will harness the collection of commodity hardware and open source software, and make it work like a turnkey solution (from bare metal all the way to the applications layer) without the price tag of a proprietary system.
What Would Such a Solution Look Like?
Here at StackIQ we spend a lot of time thinking about how to make the lives of infrastructure managers easier. How can we help enterprises of all sizes to get to the next level of infrastructure automation and benefit from a Web-scale IT approach?
What do you think? To get the discussion started, here are a few characteristics that we believe are key to being successful in this new IT world:
- Heterogeneous support for all commodity hardware and open source software
- Capability to build the entire stack from bare metal to the applications layer
- Modular extensibility of the stack to keep up with ever changing business requirements
- Simplified deployment with script free configuration
Please chime in! We are looking forward to your input.
The StackIQ Team (@stackiq)
Curious to learn more about Web-scale IT? Check out Gartner’s Cameron Haight’s blog. We also highly recommend the research paper “The Long-Term Impact of Web-Scale IT Will Be Dramatic.” (note that this is a paid Gartner publication).
It’s the time of the year to go to beautiful Salt Lake City, UT. Snow-covered mountains in a winter wonderland that offers lots of activities on and off the slopes for all you snow daredevils. But that’s not all, my friends; it’s also time for the annual SaltStack conference. It’s THE global user conference for SaltStack customers, partners, developers, and community members. Here at StackIQ, we have always been great supporters of the SaltStack community, and we are excited to announce that Mason Katz, our CTO, will give a talk alongside many other wickedly smart people. Mason’s session will focus on how to augment bare metal clusters with SaltStack. This sounds great, you may say, but what are the use cases for SaltStack combined with StackIQ’s cluster management software? This question and more will be answered during Mason’s talk, but here is a little taste of what to expect.
StackIQ Cluster Manager, our comprehensive software suite, automates the deployment and management of Big Data, Cloud, Linux, and HPC clusters. We take care of the entire software and hardware stack, from bare metal all the way to the applications layer. Recently we have expanded our kickstart installation method to include SaltStack for two use cases: first, we use SaltStack as a replacement for user management systems such as NIS or LDAP; second, we use SaltStack to dynamically manage middleware configuration files (e.g., Hadoop). Unique to our use of SaltStack is the integration of our cluster configuration database with SaltStack grains and state files. The talk focuses on our specific use of SaltStack and walks through our design decisions. We’ll introduce a novel use case where SaltStack is a consumer of configuration information in addition to a manager. Last but not least, we want to hear from the community on how to make StackIQ Cluster Manager and SaltStack work even better together.
Did we get your attention? There is no better way to get salted than at SaltConf, with hands-on labs and training, talks by your peers and SaltStack engineers and developers, big keynotes, lots of hacking and networking, and the BEST (so we’re told) snow on earth.
Do you have any questions for the StackIQ crew before or during the conference? Just DM @masonkatz or @stackiq. Be there and stay warm!
The StackIQ Team
January 28-30 @ Marriott City Center in Salt Lake City, UT
Augmenting Bare Metal Cluster Builds with SaltStack – Mason Katz, CTO, StackIQ (@masonkatz)
- Greetings from Salt Lake City, Allposters
Firsthand experience is an important part of the decision-making process for IT professionals who are exploring cloud computing and big data solutions. In response, a large financial institution worked in collaboration with Dell and StackIQ on a proof-of-concept that compared application performance on a big data cluster. Using our StackIQ Cluster Manager software, the team was able to rapidly configure the servers – leading to more, higher-quality tests than anticipated.
Read the full article “Evaluating Performance Scaling on a Big Data Cluster” co-authored by our very own Tim McIntire and Greg Bruno in Dell Power Solutions magazine, 2013 issue 3. Or if you happen to be at a Dell office, pick up a print copy in the lobby.
The StackIQ Team
Recently, we were in a partner meeting and the topic of "brownfield deployment" came up. Just to make sure we have our terms defined, a brownfield deployment is where a product is deployed onto an existing cluster, that is, the existing cluster already has a base OS installed and configured across all its nodes (a "ping and a prompt"). We were discussing this topic because our product excels at "greenfield deployments", that is, our product is a bare-metal installer that deploys, configures, optimizes and manages the entire software stack -- the base OS, middleware services and the application(s). In this discussion, the partner said, "but that doesn't fit into our customers’ current enterprise processes" of installing a base OS first, then handing servers off to another team for application deployment and management.
Actually, it does, if you look in the right place.
Today, there are several computer systems that don't fit into the “current enterprise process” of provisioning servers as a stand-alone process: Oracle and Teradata, to name two. These systems are constructed off-site at a manufacturing facility -- the hardware is assembled, then the entire software stack is installed, configured and optimized. Then the fully-assembled system is shipped to the enterprise customer as an appliance that is then integrated into the current set of on-premise computer systems.
As we've all read, we are at the beginning of a revolution surrounding database systems -- proprietary systems are being replaced with commodity "do it yourself" systems. We saw this same revolution in the high-performance computing space in the early 2000s. Proprietary systems created by Cray and IBM were replaced by commodity x86 clusters. This was fueled by significant hardware cost savings, but the end user was then faced with the heavy burden of deploying, configuring, optimizing and managing the entire software stack -- the job that Cray and IBM performed for their turnkey systems. That's why I, along with two other parallel systems developers, created the Rocks Cluster Distribution to "make clusters easy" by automating what was once a heavy burden (the Rocks Cluster Distribution is the core of StackIQ's enterprise software line).
Fast forward to today, StackIQ is making clusters easy in the enterprise. We are seeing Oracle and Teradata systems being replaced by commodity clusters. And we are seeing enterprises struggle with the heavy burden of managing cluster-aware middleware and applications, just like we saw the high-performance computing community struggle 10 years ago.
The enterprise values robust software systems, which is why Oracle and Teradata were so successful in the past. Oracle's and Teradata's manufacturing process ensured robustness and stability for every system they produced. This is still of critical importance for enterprise applications, thus StackIQ has taken cues from Oracle and Teradata (I worked at Teradata prior to co-founding the Rocks project) and built an enterprise software management product that ensures the same robustness and stability for the entire software stack that enterprise users have come to expect; we do this via on-premise manufacturing. Our software transforms on-site commodity clusters into enterprise appliances.
Once commodity clusters are managed as appliances, these systems fit into every “current enterprise process”; your racks of Oracle or Teradata systems are proof.
Greg Bruno, PhD
The word “automatic” has been around since the 1500s, but it really came to the fore in 1939. That’s when the New York World’s Fair sparked everyone’s imagination with visions of technology that promised to solve all of our problems through automation. Recently, while working with one of our customers, I was reminded how automation can still surprise people. Let me tell you what I mean.
A large credit card company recently asked us to participate in a “proof-of-concept” for their big data project. As a startup, we are always thrilled when one of the big boys wants to try out our wares, so we jumped at the opportunity.
When we arrived on site in their data center, they assigned a half-dozen machines for us to use. One would become the StackIQ Cluster Manager, and the other 5 would become cluster nodes running Hadoop. We are used to building clusters of all sizes using our software, and knew that a small, straightforward installation like this one would be a cake walk. We set about our task.
We set up a few parameters for the cluster, and launched the StackIQ Cluster Manager. It was soon up and running without a hitch, as expected.
Next, we used the Cluster Manager to install the cluster machines. Twenty minutes later, all 5 backend machines were up and running Hadoop services. Smooth. No problem. Expected.
It’s A Trap!
That’s when my colleague and I noticed that the customer’s IT people were whispering to each other, and we started to wonder if we’d done something wrong. We checked our screens and found that the cluster was indeed up and running, ready to accept Map/Reduce jobs.
So we took a deep breath and walked over to the gathered whisperers and asked if there was a problem. One of them asked in a hushed voice, “Um, how’d you guys do that?”
“Do what?” we answered.
“Bring up that one machine?” he said, pointing at one of the cluster servers.
After we explained that we hadn’t done anything special, we just let our Cluster Manager do its thing, the customer confessed, "We’ve been struggling to configure that machine for over 2 weeks now and haven’t been able to get it to install. There seemed to be something wrong with the configuration of the disk controller, but we haven’t been able to fix it.”
That’s the power of true automation. That’s what we designed our software to do. That’s what makes us very proud of the software we build. It takes the headaches out of setting up clustered infrastructure of any size by automating nearly everything — including configuring those pesky disk controllers.
What was a major problem for our customer — one they hadn’t been able to solve in weeks — wasn’t even a bump in the road for our cluster manager. It found the controller, configured it, and moved on to its next task. Smooth. No problem. Expected.
It can take as many as 80 manual steps to correctly configure a disk controller for use in a Big Data cluster, and clusters have a lot of disks — and controllers. We knew that we had to automate the configuration of all those disks to help cluster operators build their clusters efficiently. Automating the procedure dramatically reduces the time it takes to put a cluster into production.
Here’s how we do it. On first installation of a server, our software interacts with the disk controller to optimally configure it based on the node’s intended role. For example, if the machine is a data node, the disk controller will be configured in “JBOD mode” with each disk configured as a RAID 0. However, if the machine is going to be a Cassandra data node, the data disks will be automatically configured as a RAID 10. This all happens automatically — no manual steps — ensuring that all cluster nodes are optimally configured from the start.
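The policy above can be pictured as a simple role-to-layout mapping. This is a toy sketch: the role names are hypothetical and the RAID choices simply restate the examples in the text, while the real mapping is carried out by the installer against the disk controller itself.

```shell
# Illustration only: map a node's intended role to the disk layout
# policy described above (role names are hypothetical).
raid_for_role() {
  case "$1" in
    hadoop-data)    echo "JBOD: each disk as a single-drive RAID 0" ;;
    cassandra-data) echo "RAID 10 across the data disks" ;;
    *)              echo "unknown role" ;;
  esac
}
raid_for_role hadoop-data
raid_for_role cassandra-data
```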
The goal is a smooth configuration process. It’s just a bonus when we get to surprise and delight a customer who sees their cluster up and running after struggling for weeks on their own trying to solve a stubborn configuration problem.
Smooth. No problem. Expected.
Have you ever gotten so immersed in a topic that you forgot that others might not be? For instance, you may have lapsed into jargon from your workplace while at a party and been met with that look that says, “Umm, I think I’ll go refresh my cocktail now.” I know I have. It turns out people have better things to do with their time than study whatever particular topic you think about all day long.
The same thing can happen when your company communicates with people. I don’t know what business you’re in, but I’ll bet the way you and your colleagues talk about it would baffle the uninitiated. I was recently reminded of the problems insider-speak can create as we were gearing up to start a new proof-of-concept project with a prospective customer.
Here’s what happened.
At StackIQ, we make software that builds clusters for big data from bare metal. By “bare metal” we mean machines that have no software on them at all. We use that term in our presentations, sales pitches, web site, and marketing collateral.
The reason our software provisions systems from bare metal stems from the philosophy our founders developed during their years building and maintaining clusters. They discovered that if you allow operators to apply patches and change configuration settings incrementally to various machines in the cluster, you eventually wind up with a system in an unknown state. That makes it very difficult to troubleshoot problems. Which machine is running which version of the OS? Which ones are at the current patch level? Which have yet to be updated? Were all the change logs updated — every time? Who knows?
The only way to know for sure what is running on all of the machines in your cluster, is to install each of them from scratch (aka bare metal) using a known-good source. So we developed a system that does just that, and does it fast.
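One concrete way to see the drift problem is to diff package manifests collected from two nodes. The manifests below are made up for illustration; on a real cluster you would compare `rpm -qa` output gathered from each node.

```shell
# Sketch: spot configuration drift by diffing two nodes' package lists
# (made-up manifests; on real nodes, collect them with `rpm -qa`).
printf 'pkg-a-1.0\npkg-b-2.0\n' | sort > node1.list
printf 'pkg-a-1.0\npkg-b-2.1\n' | sort > node2.list
comm -3 node1.list node2.list   # prints only the lines that differ
```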
OK, back to our confused customer. We had given them our sales pitch, and they agreed to try out our software in their labs. When it came time to allocate some servers to the test, they asked us which operating system we wanted them to install. We explained (again) that it didn’t matter, since our software would install everything “from bare metal.” To which they responded, “Oh, OK. So we’ll leave the cluster nodes empty, and just install Linux on the management node.” “No need,” we explained, “we will install the entire cluster from bare metal, including the management node. There’s no need for you to install any software at all.”
Anyway, we got it all straightened out, and the customer gave us a set of bare machines to run our tests on.
Why was our customer confused? It wasn’t their fault. What we do is decidedly different from what others in our space do. Our competitors require that an OS and other software be in place before they begin their installation. They don’t operate from our “clean slate” philosophy. What’s more, the term “bare metal” is often used to mean something different in the IT community. For example, in the cloud computing space, “bare metal” is used to describe a software stack that is running directly on the hardware, and not in a virtual machine. Even Wikipedia redirects a search for bare metal to an article on “bare machine.”
I took this incident as a reminder that we should never assume what others know. Everyone’s experience is different, and that experience gives them a unique perspective. So whether you’re a marketing professional, a sales professional, or a technologist, it’s always a good idea to check that people have understood your message, and adjust your language to make yourself clear.
Hmmmm, maybe I should go run a find/replace operation on our product information to replace “bare metal” with “bare machine”…
photo credit: JD Hancock via photopin cc
GigaOM caught up with StackIQ executive, Tom Melzl, during the Structure Data conference to get an update on the company. In the interview, Tom explains why cluster management is crucial to any successful big data project, and what differentiates StackIQ from its competitors. He also gives us a peek at the technology areas the company is focused on as they develop innovations for the future.
GigaOM talks to StackIQ's Tom Melzl (3:50)
Have you heard this story? A couple of MBA students were scoping out the local 24-hour convenience store and noticed an end cap that featured an odd pairing of products: diapers and beer. Huh? Turns out that someone crunched their customer behavior data deeply enough to figure out that when a bleary-eyed new father stumbles into the store late at night, diapers or beer were probably what he was after. By displaying these prominently on the front end of the aisles, the store was able to make the late-night shopper’s quarry easy to find.
If beer goes with diapers the way cookies go with milk, imagine what insights big data could bring to your business. Retail is right in the sweet spot to benefit most from big data projects. Some large retail organizations generate terabytes of data every minute. Inventory systems, loyalty cards, and sales transactions reveal exactly what was sold, when it was sold, and what other items were rung up in the same purchase.
So Much Data, So Many Ways to Use it
What’s happening to that data now? Much of it gets stored, and later used for financial analyses of various sorts. Increasingly, other departments are starting to dip into the data for their own purposes.
Human Resources departments are using big data to determine how many sales associates and other personnel to have on hand, and when. Hiring and staffing patterns will become more precise, contributing to the bottom line.
Buyers are leveraging the data in their negotiations with suppliers. The result? Fewer returns, fewer overstocks, fewer costly mistakes like all those leftover candy canes in the back of the shop months after the holiday season has come and gone.
Shelf placement is usually done by suppliers, but retailers can use the results of their big data analysis to help optimize that placement. Maybe those oversized boxes of laundry detergent ought to be on the middle shelf instead of the bottom. Better data means better sales and both buyers and suppliers will like that.
Big data tools can help your marketing staff do better, faster research. In one store, they’ll put batteries on the end cap closest to the door. At another store, that’s where the bathroom tissue goes. Who’s right? Who’s wrong? What about the beer and diapers? With the right data, you can take the guesswork out of it. And while we’re at it, take a look at which coupons are working best and which ones never move anything. The possibilities for tweaking are nearly endless.
So, What Do You Need to Make it Happen?
Most retailers are choosing open source Apache Hadoop software running on low cost, commodity hardware for their big data projects.
Setting up and operating a big data cluster can be an intimidating proposition for IT departments used to working with more traditional enterprise data center resources such as email, web, and database servers. Big data clusters are different animals. Fortunately, the market has responded by providing good deployment and management tools. With the right tools, any IT department can deploy and manage big data clusters with confidence, even if they’ve never done it before.
Another benefit of working with a good vendor is that they are experts in the art of cluster management. You can draw on their years of experience, building and running clusters of all sizes. Chances are pretty good they’ve already seen and solved any problem you run across.
So, are you ready to take the big data plunge? Start out on the right foot, and pretty soon you’ll move from being a big data beginner to a petabyte-crunching pioneer.
photo credit: x-ray delta one via photopin cc
Last week, Pat Gelsinger, CEO of VMware, opened a can of worms with his comments at his company's partner confab in Las Vegas. Gelsinger is clearly concerned about enterprise computing workloads migrating to Amazon’s public cloud (AWS). Further, he stated that those lost workloads are gone forever - "a workload goes to Amazon, you lose, and we have lost forever" - and that "we want to own corporate workload." Gelsinger's comments gave rise to several posts, tweets, and articles in the IT blogosphere, but what I found more interesting was the statement from VMware President and Chief Operating Officer Carl Eschenbach: "I look at this audience, and I look at VMware and the brand reputation we have in the enterprise, and I find it really hard to believe that we cannot collectively beat a company that sells books." Well Carl, you should be concerned, because that measly bookseller is creating competitive advantage in IT faster than VMware and most every other IT vendor; and your predicament is exacerbated by the ossification of enterprise IT organizations, which cannot adequately react to the needs of the business.
Amazon became the dominant bookseller by driving its costs down rapidly while providing a very convenient, automated book buying experience. Guess what? At AWS they're doing the same thing for computing - making it cheap and easy to consume. The fanatical AWS team is singularly focused on delivering needed solutions at the lowest possible cost that can be easily provisioned and managed by the user. Does this sound like the way IT vendors and enterprise IT organizations create and deliver new solutions that support the needs of their business users? Hardly. Vendors instead behave according to corporate edict, selling products and pushing services that don't create the best solution for the customer, while the enterprise IT organization remains comfortable in its cocoon of processes and standards. Is it any wonder that workloads migrate to AWS with or without IT approval?
So, will Amazon and its ilk win the enterprise workload war? No doubt some percentage of corporate computing is appropriate for the public cloud, and the mix will be determined over time by competitive markets - public cloud and enterprise IT are both viable. However, down the road, should enterprise IT be concerned that public clouds will completely dominate computing, with traditional solutions shrinking into oblivion and leaving CIOs with no more to do than cost accounting? They probably feel safe for now, but it's also clear that IT vendors and IT departments need to take heed of the cost, responsiveness (read: automation), and maniacal focus at AWS lest that steamroller flatten them. Gelsinger and Eschenbach are half right -- it’s not time to throw out the enterprise data center, but it is time to throw out the traditional enterprise IT playbook. StackIQ can help.
Joe Markee, StackIQ, CEO