
@CloudExpo: Article

Goldie Locks and the Three Clouds: The Rise of the Enterprise Cloud

A cloud needs to be more than an infrastructure dispenser – providing small/medium/large chunks of infrastructure for each user

We all know the story of Goldilocks and the three bears, but have you heard the one about Goldie Locks and the three clouds? This tale is playing out throughout the IT marketplace.

Goldie Locks - an IT executive for a state government - has once again found herself in a dilemma. "If only I could choose one of the three options," she sighs. Goldie's dilemma is a result of competing requirements within her enterprise. Regarding infrastructure costs, Goldie has been told to "do more with less."

"If someone says that one more time, they're going to have porridge thrown at them," she huffs. Goldie knows that standardizing infrastructure requirements to serve the business and its processes securely, reliably and quickly is a proven way to reduce capital and operational costs. On the other hand, various business units and their departments have specific requirements for their mission-critical applications. They are resisting giving up control.

While many of her colleagues suggest she use the public cloud, Goldie believes that the security implications would be a deterrent to acceptance within her enterprise. Having done a thorough job of investigating various cloud computing models, she now needs to put together a request for proposal (RFP) to start searching for outside help with her dilemma.

She begins by taking into consideration the three standard deployment models of cloud infrastructure and their hybrid combinations - as defined by the U.S. National Institute of Standards and Technology (NIST) - and determining whether these are a fit for her enterprise:

  • Private cloud: Provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units).
  • Public cloud: Provisioned for open use by the general public.
  • Community cloud: Provisioned for exclusive use by a specific community of consumers from organizations that have shared concerns (e.g., mission, security requirements, policy, and compliance considerations).
  • Hybrid cloud: A composition of two or more distinct cloud infrastructures (private, community, or public) that remain unique entities, but are bound together by standardized or proprietary technology that enables data and application portability (e.g., cloud bursting for load balancing between clouds).
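The "cloud bursting" pattern mentioned in the hybrid definition amounts to a simple placement decision: run workloads on private capacity until it is exhausted, then overflow to a public provider. As a rough sketch (the class name, capacity units and thresholds below are invented for illustration, not taken from any product):

```python
class BurstScheduler:
    """Toy placement policy: prefer private capacity, burst to public when full."""

    def __init__(self, private_capacity):
        self.private_capacity = private_capacity
        self.private_used = 0

    def place(self, units):
        """Decide which cloud should host a workload needing `units` of capacity."""
        if self.private_used + units <= self.private_capacity:
            self.private_used += units
            return "private"
        return "public"  # burst: overflow to the public cloud


scheduler = BurstScheduler(private_capacity=10)
placements = [scheduler.place(4) for _ in range(4)]
print(placements)  # ['private', 'private', 'public', 'public']
```

A real scheduler would also weigh cost, data gravity and compliance constraints, but the shape of the decision is the same.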

"Death by Committee" is her thought after analyzing these models. Each is too narrow for her enterprise. These cloud deployment models only provide the "how" without truly understanding the "what" and "why" of her situation. The IT side of the decision is obvious - drive down operational and capital costs to give the IT team time and money to solve strategic issues for the business. The best way to do this is to standardize processes, automate tasks and share infrastructure and administrative resources as much as possible. This is, in general, what a cloud provides.

But IT is serving a set of key stakeholders who have requirements beyond infrastructure. These stakeholders are the application/business owners who rely on IT to support delivery of their revenue-generating services and products. They are not well served with the three deployment models defined above, nor are they served by a hybrid of those models.

A Look into Goldie's Enterprise
In general, an enterprise consists of distinct parts (such as business units) that serve different customers, have different financial results and offer different products and services. They are fairly autonomous, but all operate from a common set of financial resources and processes, a common strategy and common metrics that determine success.

Consider Goldie's enterprise, which is a consolidation of a number of state agencies:

Her enterprise is partitioned into two high-level branches (State Police and Transportation), each consisting of multiple, semi-autonomous departments. Each department is interested in controlling its own infrastructure, and security and administrative processes differ from one department to another. Each must meet its own targets for privacy, availability and service metrics.

For example, the Department of Public Info Office, within the State Police branch, may require highly predictable, millisecond response times for public users. In this instance, it may make sense to use public cloud infrastructure for the web servers, since there may be a requirement to scale up very quickly to meet high workload demands.

The Department of the Deputy Commissioner, also within the State Police branch, may also require specific infrastructure services, processes, automation and regulations, such as "hardened" OS images and encryption for all transmission of information.

The Department of Highway Administration, within the Transportation branch, must guarantee that its web site is available 99.999% of the time. That demands redundant, high-availability configurations, as well as duplicate resources at a disaster recovery site.

NIST Deployment Models and the Enterprise - Square Peg, Round Hole
Goldie's enterprise cloud must be structured to support these multiple "parts." In turn, these parts can themselves have parts, and so on. This is similar to many of today's enterprises, which are the result of consolidating other businesses and agencies that need to function in a semi-autonomous fashion, but are still members of the larger organization.

A cloud needs to be more than an infrastructure dispenser - providing small/medium/large chunks of infrastructure for each user, without considering the unique requirements for different groups of users. Goldie knows that today's cloud products and services do not meet the needs of her enterprise stakeholders. She would like to deploy a single, centralized enterprise cloud that allows business units and their sub-units to:

  • Share underlying virtual resources as one large collection of cloud resources
  • Allow end users, such as developers, testers, demonstrators and system admins, to use a simple service catalog to manage the lifecycle of all cloud resources in the same manner
  • Set up autonomous administration, with unique policies and processes, as required
  • Allow business units to deploy their entire spectrum of applications, with unique service level objectives for development, test, production, mission-critical and regulated workloads
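The hierarchical tenancy these requirements describe can be modeled as a tree of tenants, where each sub-unit inherits policies from its parent unless it overrides them. The class, tenant names and policy keys below are hypothetical, chosen only to mirror Goldie's enterprise:

```python
class Tenant:
    """A node in a multi-level tenancy tree: branch, department, sub-department..."""

    def __init__(self, name, parent=None, policies=None):
        self.name = name
        self.parent = parent
        self.policies = policies or {}  # overrides that apply from this node down
        self.children = []
        if parent:
            parent.children.append(self)

    def effective_policy(self, key):
        """Walk up the tree: this tenant's policy, else the nearest ancestor's."""
        node = self
        while node is not None:
            if key in node.policies:
                return node.policies[key]
            node = node.parent
        return None


# Hypothetical structure mirroring Goldie's enterprise
root = Tenant("Enterprise Cloud", policies={"encryption": "standard"})
police = Tenant("State Police", parent=root)
deputy = Tenant("Deputy Commissioner", parent=police,
                policies={"encryption": "all-transmissions", "os_image": "hardened"})

print(deputy.effective_policy("os_image"))    # hardened (its own policy)
print(police.effective_policy("encryption"))  # standard (inherited from root)
```

This is exactly the structure the flat, single-level "multi-tenancy" of current products fails to provide: sub-units get autonomy without leaving the shared pool.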

Now, let's see why the current cloud models cannot address these requirements.

Private Clouds
Goldie has looked at all of the currently available private cloud products. "These are too small-minded," she thinks. Every private cloud offers "multi-tenancy," which allows each business unit to manage its allotted set of cloud resources. But none of them offers any additional structure beneath the first level. Many of Goldie's business units have their own autonomous sub-units that require unique policies, processes and resources. They will want their own cloud, which does not meet her first requirement.

Public Clouds
She then turns to the available public clouds. "They are big and cheap, but my stakeholders do not want to expose their mission-critical or regulated applications." She chuckles, thinking of one security-minded colleague who actually turned pale when she suggested a public cloud for his application. On the other hand, she is painfully aware of some development teams that are slipping under the radar and deploying virtual resources in a public cloud for test and development. It is cheap and cheerful, but it is not handled by the centralized IT department, and it exposes the business to risk.

Community Clouds
A community cloud offers cloud resources to a like-minded set of users/administrators. These users have agency-specific requirements, such as service levels, privacy, etc. If individual community clouds are deployed, then Goldie cannot optimize the sharing of all of the cloud resources. "This just isn't right at all," she says.

Hybrid Clouds
The final NIST deployment model does not provide any capabilities over and above the first three models. Instead, it is defined as a composition of two or more distinct instantiations of private, public or community clouds. Goldie has looked at all of the hybrid cloud management services and products, compared them to her requirements and decided that they do not meet her needs.

The Rise of an Enterprise Cloud
Through her analysis of the traditional cloud models, Goldie concludes that none of them are quite right. What she's looking for is a cloud that can address requirements unique to her enterprise. Let's refer to this as an "Enterprise Cloud." An Enterprise Cloud provides the capabilities of private, public and community clouds within a single cloud management platform that can support heterogeneous processes and requirements.

Goldie eventually conceived of such an Enterprise Cloud. It consists of a blend of internal datacenter resources, as well as resources provided by one or more public clouds. These are the "raw ingredients" that are abstracted into "cloud resources." Each agency can choose the specific cloud resources it needs to meet its requirements, including high availability, speed of deployment, cost, compliance with regulations and low latency response times.

Cloud-wide administrators, as well as specific agency and sub-agency administrators, are responsible for managing cloud resources through one "single pane of glass" interface. Aside from the properties of the cloud resources, their life cycles are all managed in the same manner, independent of where the raw materials came from. The end users of the cloud (e.g., testers, developers, infrastructure administrators) can be isolated from the underlying source of the raw resources. For example, an application could use public cloud for its web-facing tier, a low-cost set of internal cloud resources for its application tier and a highly regulated, encrypted and hardened set of cloud resources for its data layer. Goldie thinks of this as a "Hybrid Enterprise Application."
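The "single pane of glass" idea, where every resource shares one lifecycle regardless of where its raw materials come from, might be sketched as a common interface that both internal and public-cloud resources implement. Everything here (the class names, the three tiers, the messages) is hypothetical, intended only to illustrate the abstraction:

```python
from abc import ABC, abstractmethod


class CloudResource(ABC):
    """Uniform lifecycle, independent of the underlying source of raw resources."""

    @abstractmethod
    def provision(self) -> str: ...

    @abstractmethod
    def decommission(self) -> str: ...


class PublicCloudResource(CloudResource):
    def provision(self):
        return "provisioned in public cloud"

    def decommission(self):
        return "released public cloud capacity"


class InternalResource(CloudResource):
    def provision(self):
        return "provisioned in internal datacenter"

    def decommission(self):
        return "returned to internal pool"


# A "Hybrid Enterprise Application": each tier sourced differently,
# but all managed through the same interface.
tiers = {
    "web": PublicCloudResource(),   # public-facing, scale-out
    "app": InternalResource(),      # low-cost internal capacity
    "data": InternalResource(),     # hardened, regulated internal capacity
}
for name, resource in tiers.items():
    print(name, "->", resource.provision())
```

The point is that testers, developers and administrators see only `CloudResource`; the choice of datacenter or public provider becomes a per-tier property rather than a per-cloud silo.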

Goldie concludes that she needs to strike out on her own and develop a unique RFP that reflects her mental image of an Enterprise Cloud. If she settles for the types of clouds that are enumerated in the NIST document, she will never convince the various stakeholders to share a single cloud.

By focusing on key requirements - a single management framework across the enterprise, the use of both public clouds and the datacenter as sources of virtual resources, and a hierarchical, multi-level tenancy structure - Goldie decides that she has finally found an Enterprise Cloud that is "juuuuuust riiiight."

More Stories By Michael A. Salsburg

Dr. Michael Salsburg is a Distinguished Engineer and Chief Cloud Solutions Architect for Unisys Corporation. He holds two international patents in infrastructure performance modeling algorithms and software. In addition, he has published more than 60 papers and has lectured worldwide on real-time infrastructure, cloud computing and infrastructure optimization. In 2010, Dr. Salsburg received the A. A. Michelson Award from the Computer Measurement Group – its highest award for lifetime achievement.
