Hello Friends, this is Nilesh Joshi from Pune, India. By profession I am a UNIX Systems Administrator and have a proven track record in UNIX systems administration.
This blog is written from both my research and my experience. The methods I describe herein are those that I have used and that have worked for me. It is highly recommended that you do further research on this subject. If you choose to use this document as a guide, you do so at your own risk. I wish you great success.
Sunday, July 3, 2011
Cloud computing – Revisited
Well, before I write something about the IBM Cloud Computing platform (IBM Integrated Service Delivery Manager), I would like to share a few of my thoughts on Cloud Computing and the factors involved in implementing it.
I’m sure almost everyone knows about the cloud models/types, namely public cloud, private cloud, hybrid cloud, and community cloud. All of these clouds have specific service delivery models, in particular Infrastructure-as-a-Service, Platform-as-a-Service, and Software-as-a-Service.
Today, I would like to write about the implementation factors of cloud computing. For the past 8-9 months I have been working on various cloud technology platforms/products, and on quite a few POC environments, to test out and figure out which product best suits an organization's specific requirements. There is no ready-made cloud product out there that will meet all of your organization's requirements, hence you need to decide whether to customize the solution or to alter the organization's policies and structure, whatever the requirement demands. Trust me, testing the technology is an easy job; deciding on the best-fit product for an organization is not an easy job, as this goes beyond just technology. Cloud implementers, decision makers, and technical people should look at it from a business viewpoint too. Anyway, that is not the topic of discussion here.
Let’s start this discussion with the traditional IT system management portfolio and service delivery model. In the typical scenario, a software development project gets a requirement to develop a product, along with the resources required to deliver it. The project manager or project lead identifies the resource requirements and asks the IT department either to procure the necessary systems, OS, software, databases, and toolkits, or to provision them on the available IT infrastructure. This takes a significant amount of time, since the person driving the project has to get many things approved by higher management, the software compliance team, the information security team, etc. Then procurement starts, which takes several weeks, or he raises a request to the IT department to provision systems with the necessary OS, software, and databases, which also takes several days, and so on. Only then does actual application and product development start, followed by testing and deployment on that infrastructure. We can see several challenges in this model, as described below –
Need large Capex - Large investments are required to procure the infrastructure needed for any product development.
Poor utilization of resources - Application usage is not going to be constant, yet the infrastructure is provisioned for peak demand; hence the infrastructure remains under-utilized for a major part of the time.
Slow Time-to-Market - This model of procuring and provisioning infrastructure usually requires significant time and reduces the agility of an organization in creating new business solutions.
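To make the under-utilization point concrete, here is a small back-of-the-envelope sketch. All the numbers are hypothetical, chosen only to illustrate the calculation, not taken from any real environment:

```python
# Illustrative sketch: infrastructure sized for peak demand vs. average use.
# All figures below are hypothetical assumptions for demonstration only.

peak_demand_cores = 128      # capacity procured to survive the peak load
avg_demand_cores = 30        # what the application actually uses on average

utilization = avg_demand_cores / peak_demand_cores
idle_fraction = 1 - utilization

print(f"Average utilization: {utilization:.0%}")      # ~23%
print(f"Capacity sitting idle: {idle_fraction:.0%}")  # ~77%
```

With numbers like these, roughly three quarters of what was paid for sits idle most of the time, which is exactly the Capex and utilization problem described above.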
Now we can clearly see how the traditional IT system management portfolio and service delivery model proves expensive in today's competitive, difficult economic market.
Also, on the other hand, nothing is really accountable as such. For system administrators it becomes a challenge to show that a particular system dedicated to a specific functional group is under-utilized and could be shared among other groups. In such cases cost grows and complexity increases as the number of hardware and software resources increases; data center space and cooling costs go up; more manpower is required to handle the infrastructure; silos lead to a lack of standardization; and there is much more to talk about.
From another point of view, over 70% of the IT budget in a typical data center goes just to "keeping the lights on", especially keeping the "green" lights on! Only a small portion of each dollar spent on IT today creates a direct business benefit. Since data center IT assets become obsolete approximately every 5 years, the vast majority of IT investment is spent on upgrading various pieces of infrastructure and providing redundancy and recoverability: activities that consume approximately 60 to 80% of IT expenditures without necessarily providing optimal business value or innovation.
Virtualization and consolidation have played a major role in overcoming the above challenges to a certain level. However, IT started seeing many challenges with virtualized environments, such as VM sprawl and costing. To overcome those challenges, and the challenges of the overall service delivery methodology, Cloud Computing technology evolved.
Cloud Computing is a model of service delivery and access where dynamically scalable and virtualized resources are provided as a service over the Internet.
Cloud Computing offers an alternative approach that profoundly transforms the way in which information and services are consumed and provided and can enable businesses to:
Lower costs by using energy and resources more efficiently
Enhance agility, growth, and profitability
Simplify operations and management
Ensure elastic and trusted collaboration between various groups, which results in visibility and standardized, smooth operations
Faster time to market
On-demand elastic, dynamic infrastructure, and a lot more.
Cloud computing addresses many of the challenges of IT silos: inefficiencies, high costs, and ongoing support and maintenance concerns, as well as increasing user demand for services.
Evolution towards Cloud
Both "private" and "public" cloud computing are based on qualities such as self-service, pay-as-you-go charge-back, on-demand provisioning, and the appearance of unbounded scalability.
The public cloud has its own benefits and challenges; everybody knows the benefits, so let us look at the challenges of the public cloud –
Public clouds like Amazon AWS, Microsoft Azure, and Google App Engine offer infrastructure and platforms as services over the internet. In public clouds, resources and costs are shared by users, who consume them over the internet on a pay-per-use model.
This model appeals especially to startups and small organizations that have not invested in hardware resources and are looking for ways to avoid the large capex involved in procuring infrastructure upfront. Even though there are several benefits like cost savings, faster time to market, etc., from this model, there are a few challenges listed below that are preventing wide scale adoption of public clouds.
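The capex-avoidance argument can be sketched with a rough comparison. The figures below are invented purely for illustration; real pricing varies widely by vendor and instance type:

```python
# Hypothetical comparison of upfront capex vs. public-cloud pay-per-use.
# Every number here is an illustrative assumption, not real vendor pricing.

server_capex = 10_000.0        # upfront cost of one owned server (USD)
server_life_months = 60        # ~5-year refresh cycle, as mentioned earlier
cloud_rate_per_hour = 0.25     # assumed pay-per-use rate for a comparable VM

owned_cost_per_month = server_capex / server_life_months
hours_per_month = 730
cloud_cost_per_month = cloud_rate_per_hour * hours_per_month

print(f"Owned server: ${owned_cost_per_month:.2f}/month")
print(f"Cloud, running 24x7: ${cloud_cost_per_month:.2f}/month")

# A workload that runs only part of the time pays only for what it uses:
duty_cycle = 0.2               # workload busy 20% of the time
print(f"Cloud at 20% duty cycle: ${cloud_cost_per_month * duty_cycle:.2f}/month")
```

The point is not the exact figures but the shape of the trade-off: a 24x7 steady workload may be cheaper on owned hardware, while a bursty or part-time workload can be far cheaper pay-per-use, which is why startups find this model so appealing.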
Security - The biggest blocker is the possible security issues due to the multi-tenant nature of public clouds. There are security and privacy concerns with sharing the same physical hardware with unknown parties that need to be addressed.
Control over IT - There is no direct control of the IT infrastructure, hence hosting mission-critical, data-sensitive applications there is a potential risk.
Leveraging Existing Investment - Most large organizations that have already invested in their own data centers would see a need to leverage those investments as an important criterion in adopting cloud computing.
Corporate Governance and Auditing - Performing governance and auditing activities with corporate data abstracted into the public cloud poses challenges that are yet to be addressed. There are also legal limitations on storing data across different national boundaries.
Outage - Public cloud vendors have suffered several outages, and that makes organizations wary of adopting the public cloud for their mission-critical applications.
Incident history -
Microsoft Windows Azure: malfunction causing a 22 h outage on March 13/14, 2008
Amazon S3: authentication service overload leading to a 2 h outage on Feb 15, 2008
Amazon S3: single-bit error causing a gossip protocol blowup and a 6–8 h outage on July 20, 2008
FlexiScale: core network failure causing an 18 h outage on Oct 31, 2008
The most recent AWS outage lasted around 48 h.
Well, the most important positive is that private clouds offer a critical additional benefit: TRUST. It is this ability to offer elastic computing without sacrificing security or control that is driving many businesses to move to this delivery model for IT services.
Initially, most private clouds will be made up almost entirely of internal resources. A private cloud can combine both external and internal cloud resources to meet the needs of an application system, and that combination, which is totally under enterprise control using unified management, can change moment by moment; this is also called a hybrid cloud. With a private cloud, enterprises can run processes internally and externally, having established the private cloud as the control point for workloads. With control through a unified management tool and a user-centric view, the private cloud thus enables IT to make the best decision about whether to use internal resources, external resources, or both, and allows that decision to be made in real time to meet user service needs.
The movement toward cloud computing began for enterprises with data center virtualization: the consolidation of server, storage, and network resources to reduce redundancy, wasted space, and equipment, with measured planning of both architecture (including facilities allocation and design) and process.
Below are the three major stages to cloudify your organization. (There are several stages in between, like requirement gathering; building service offerings and service catalogs from the gathered information; security, software compliance, and information security compliance; defining the billing model/charge-back system; etc.)
Stage 1: Consolidation and Virtualization
Consolidation is a critical application of virtualization, enabling IT departments to regain control of distributed resources by creating shared pools of standardized resources that can be rationalized and centrally managed. Many IT departments are already consolidating under-utilized computing resources by running multiple applications on a single physical server, using virtualization technologies from IBM AIX, Sun-Oracle (Zones/Containers and LDOMs), VMware, Linux KVM, etc.
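A rough sizing sketch shows why consolidation pays off. The server counts, utilization figure, and the 80% usable-capacity headroom below are all illustrative assumptions:

```python
# Rough consolidation sizing: how many physical hosts are needed once
# under-utilized standalone servers become VMs on shared hardware?
# All numbers, including the 80% headroom, are illustrative assumptions.

import math

legacy_servers = 40            # standalone boxes to be virtualized
avg_util_per_server = 0.10     # each box averages ~10% utilization
host_capacity = 0.80           # usable fraction of one virtualization host

total_load = legacy_servers * avg_util_per_server   # in "server units"
hosts_needed = math.ceil(total_load / host_capacity)

print(f"Aggregate load: {total_load:.1f} server-equivalents")
print(f"Physical hosts after consolidation: {hosts_needed}")
```

Forty lightly loaded machines collapse onto a handful of well-utilized hosts, which is where the savings in hardware, data center space, cooling, and manpower come from.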
Stage 2: Automation and Optimized Virtualization
In this stage, virtualization optimizes IT resources and increases IT agility, thus speeding time-to-market for services. Through automation, data centers systematically remove the manual labor required for the run-time operation of the data center. To create a cloud service, self-service and metering (feedback about the cost of the resources allocated) are offered in addition to automation.
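The metering idea can be sketched very simply: record what each group consumes and price it per unit. The group names, usage records, and unit rates below are all hypothetical:

```python
# Minimal sketch of metering/charge-back: record consumption per group
# and bill it at per-unit rates. Rates and usage figures are hypothetical.

RATES = {"cpu_hours": 0.05, "gb_storage": 0.10}   # assumed unit prices (USD)

usage = [
    {"group": "dev",  "cpu_hours": 400, "gb_storage": 200},
    {"group": "test", "cpu_hours": 150, "gb_storage": 500},
]

for record in usage:
    # Sum rate * quantity over every metered resource in the record.
    bill = sum(RATES[k] * v for k, v in record.items() if k in RATES)
    print(f"{record['group']}: ${bill:.2f}")
```

This kind of feedback is precisely what lets a system administrator show, with numbers, which group is consuming what, the accountability that the traditional model lacks.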
Stage 3: Federation
Linking disparate cloud computing infrastructures with one another by connecting their individual management infrastructures allows disparate cloud IT resources and capabilities (capacity, monitoring, and management) to be shared, much like power from a power grid. It also enables unified metering and billing, one-stop self-service provisioning, and the movement of application loads between clouds, since federation can occur across data center and organization boundaries, with cloud internetworking. Cloud internetworking is the network technology enabling the linkage of disparate cloud systems in a way that accommodates the unique nature of cloud computing and the running of IT workloads.
Hence, after stage 3 our private cloud should look as follows.
There is much more to discuss and write on this topic; however, for this blog entry I’ve given a very high-level overview of my thoughts on implementing a private cloud. I hope this helps.