No wait, I mean Virtualization … , I mean Grid … well no, I should say Utility … Oh, well, forget it … just find a way to save some money!
If you have been in the information technology (IT) sector for any length of time, you are probably accustomed to two IT industry absolutes: (1) never-ending hype from the hardware/software vendors and service providers; and (2) the solutions you build are generally too expensive and take too long to implement. With this in mind, I have yet to make up my own mind on “cloud computing”. However, I have been a big fan of Application Service Provider (ASP) and Business Process Outsourcing (BPO) services, which I believe are the genesis of cloud computing (CC).
The transition to CC seems to have happened this way. Infrastructure components became a commodity and much more reliable. Applications became reusable components and extremely portable. Combine the infrastructure (including fast internet) with application portability, and poof, you have the ingredients to spawn a new market need. It was this new market opportunity that allowed companies to begin outsourcing accounting or other back office processes to a BPO provider. However, the BPOs needed a way into the client’s network, which is where the line between ASPs and BPOs started to blur. With this fuzzy boundary, IT needed a new term, so poof, Grid computing. Not the Grid computing you run on the PS3 or in a screen saver, but the type of Grid computing that involves running a prepackaged application offsite or in-house on a seemingly abstract layer of infrastructure resources.
Next you take Grid computing and wrap it in a nice financial model, and poof, you have Utility computing. This starts with the Grid and allows the client to run a set of applications, generally offsite, for a monthly or per-usage fee. Utility computing can be as simple as having your corporate internet presence on a shared web hosting provider, or using offsite Exchange management, or offering just about any application that can be run in a browser as a “plug in the wall” type of utility. The Utility computing applications and pricing models seek to appear similar to the services and billing models of the phone, electric, and gas companies. However, have you tried reading a cell phone or utility bill lately? Why is the government charging me to use my phone? Why are you telling me how many therms I used for the first 6.7 days compared to the remaining 23.3 days? And why does February’s bill, which covers only 28 days, cost more than January’s, when January was colder? Excuse my digression, but such utility practices will be relevant when researching the pricing model for the Elastic Compute Cloud being offered by Amazon as EC2.
For those of us who need bread crumbs, we are here: | Commoditized Infrastructure | Reliable Internet and Infrastructure | Portable Apps | ASPs | BPOs | Grid | Utility | and finally (cue the bright light and church choir), Cloud Computing!
Saving Money In the Cloud
Cloud computing combines everything just discussed but with the promise of technology being faster and cheaper. Notice I did not include “better”! Looking at the economy and the purchasing trends of my clients over the last two years and the forecasts for 2009, some seem to be compromising quality for cheaper and faster (faster to market). Now I do not necessarily believe lowering quality is a bad thing. Just ask a start-up company running on a shoestring budget, or a company sponsoring an R&D activity that is not directly aligned with corporate initiatives. Moving from 99.0% availability to 99.9% availability can promote a perception of higher quality. However, that extra 0.9% can also translate to double, triple, or even more in infrastructure and labor costs.
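The gap between 99.0% and 99.9% is easier to appreciate when expressed in hours of downtime per year. A quick back-of-the-envelope sketch (the arithmetic only; real SLAs measure and credit availability in their own ways):

```python
# Back-of-the-envelope: annual downtime allowed at a given availability level.
HOURS_PER_YEAR = 24 * 365  # 8760, ignoring leap years

def allowed_downtime_hours(availability_pct: float) -> float:
    """Hours per year a service can be down and still meet the stated level."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9):
    print(f"{pct}% availability -> {allowed_downtime_hours(pct):.1f} hours/year of downtime")
```

At 99.0% you are allowed roughly 87.6 hours of downtime per year; at 99.9%, only about 8.8 hours. Closing that gap of about 79 hours is where the extra redundancy, replication, and labor costs come from.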
Where is the sweet spot for cloud computing? It offers a presentation layer that behaves much like a platform, while the “infrastructure” layer lives out in the Internet “cloud”. I like to think of the “cloud” as an abstract view of the infrastructure, because the complexities and many of the decisions are hidden from the client and made by the cloud provider(s). To make the cloud a reality, it needs to be faster and cheaper than on-premise resources, and it needs the ability to add quality back into the equation.
“Board members, I would like to outsource our IT processing to Amazon.com!”
Why? Are you buying a book? Well, no, I am going to leverage the Amazon.com Capacity On Demand model and buy storage. A book on storage? Um, well no, they sell on-line storage and CPU capacity. Oh, how much is that? Well, I don’t really know, they charge a penny for certain types of memory, with a certain level of service, and it depends on CPU cycles at the SLA of those cycles. So, how much? Well, they have a calculator that will tell us, but we will not really know until we receive the bill. Oh, like the phone company or electric company? But there’s a calculator to estimate with …
Amazon’s Elastic Compute Cloud (EC2)
Amazon has deployed their Amazon Web Services (AWS) along with their Elastic Compute Cloud (EC2) service. (Link: aws.amazon.com/ec2). I have seen many developers jump on board to use and deploy EC2 and associated AWS products. However, my corporate clients are not knocking down the door to buy AWS. Both EC2 and AWS provide an abstracted layer of infrastructure, along with the set of APIs and platforms needed to utilize the cloud. I am impressed that AWS has deployed their products using a service level agreement (SLA) model. This allows their customers to pay for 99.9% availability and a fully replicated environment, or to contract for a non-replicated one with lower availability (which is great for R&D or development). I am less impressed with their cost model and narrower approach to what is offered from a platform view. Did I mention they have a calculator to estimate? AWS, along with the EC2 product line, includes a flexible and dynamically adjustable environment, the ability to scale up or down, the use of Amazon Machine Images (AMIs), a pay-for-use model, and many corporate-type offerings like JBoss.
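About that calculator: the pay-for-use model is, at heart, a sum of metered line items. Here is a toy sketch of the idea; all three rates below are hypothetical placeholders, not actual AWS pricing, which varies by instance type and region and changes over time:

```python
# Toy monthly-cost estimator in the spirit of Amazon's EC2 calculator.
# All rates are HYPOTHETICAL placeholders, not actual AWS pricing.
HOURLY_INSTANCE_RATE = 0.10   # $ per instance-hour (hypothetical)
STORAGE_RATE_GB_MONTH = 0.15  # $ per GB-month of storage (hypothetical)
TRANSFER_RATE_GB = 0.17       # $ per GB transferred out (hypothetical)

def estimate_monthly_cost(instances: int, hours: float,
                          storage_gb: float, transfer_gb: float) -> float:
    """Estimate a monthly bill; the real number only shows up on the invoice."""
    return (instances * hours * HOURLY_INSTANCE_RATE
            + storage_gb * STORAGE_RATE_GB_MONTH
            + transfer_gb * TRANSFER_RATE_GB)

# One small server running all month (~720 hours), 50 GB stored, 100 GB out:
print(f"${estimate_monthly_cost(1, 720, 50, 100):.2f}")
```

The point of the sketch is the budgeting problem, not the arithmetic: every input is a usage estimate, so like the phone bill, the final number is only known after the fact.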
Salesforce.Com and Force.com
The work and development activities associated with SalesForce.com (SF) are very interesting. (Link: www.salesforce.com). SF has successfully deployed their technology and promoted their product as “platform as a service”. The Force.com portion of SF is actually an open directory of applications that fit into SF and the CRM models of SF. (Link: force.com). The abstracted infrastructure portion of SF does not seem to be as open as EC2; however, SF together with Force.com seems to make business sense. I am more excited about SF than EC2 because I can develop using the SF platform and APIs, and then deploy applications to clients using a subscription-based SF front end. Some of the SF applications seem to be little more than a link-and-launch feature. However, integration with CRM and “single pane of glass” visibility into IT infrastructure performance and availability for running a business has considerable potential.
We did not discuss virtualization, and that is because virtualization is not new and is more evolutionary than revolutionary; it is now considered just a part of the computing environment. Don’t get me wrong, virtualization is important as an enabler for cloud computing, but by itself it has little value. As for cloud computing, the idea may not be totally revolutionary either. However, the way companies are implementing the cloud as a platform, and as a way to lower cost, retain quality, and improve time to market, may turn into revolutionary opportunities.