What DevOps Can Do About Cloud's Predictable Provisioning Problem

Cloud and software-defined architectures have brought to the fore the critical nature of load balancing

Go ahead. Name a cloud environment that doesn't include load balancing as the key enabler of elastic scalability. I've got coffee, so it's all good... take your time...

Exactly. Load balancing - whether implemented as traditional high availability pairs or clustering - provides the means by which applications (and infrastructure, in many cases) scale horizontally. It is load balancing that is at the heart of elastic scalability models, and that provides a means to ensure availability and even improve performance of applications.

But simple load balancing alone isn't enough. Too many environments and architectures are wont to toss a simple, network-based solution at the problem and call it a day. But rudimentary load balancing techniques that rely on a single, context-free metric are doomed to fail eventually. That's because a simple number like "connection count" does not provide enough context to make an intelligent load balancing decision. An application instance may currently have only 100 connections while another has 500, but if the capacity of the former is only 200 while the capacity of the latter is 5000, a decision based on "least connections" is not the right one.
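
To make that concrete, here's a minimal sketch of the two decisions side by side. The instance data mirrors the example above; the field names are illustrative, not any particular load balancer's API:

```python
# Hypothetical instance data taken from the example above; field names
# are assumptions, not a real product's data model.
instances = [
    {"name": "app-1", "connections": 100, "capacity": 200},
    {"name": "app-2", "connections": 500, "capacity": 5000},
]

# Naive least connections picks app-1, which is already at 50% of capacity.
least_conns = min(instances, key=lambda i: i["connections"])

# A capacity-aware choice compares utilization (connections / capacity)
# instead: app-2 is only at 10% of capacity and is the better target.
least_utilized = min(instances, key=lambda i: i["connections"] / i["capacity"])

print(least_conns["name"])     # app-1 -- the wrong choice
print(least_utilized["name"])  # app-2 -- the right one
```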

Application-aware networking tells us that load balancing decisions - even rudimentary ones - should be made based on a variety of variables such as application load, response time, and capacity. That means a modern load balancing service must be capable not just of tracking these metrics but of gathering them from the application instances under management.
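
As a sketch of what such a multi-variable decision might look like (the weights, the normalization, and the metric names are all assumptions for illustration, not a reference algorithm):

```python
def score(instance, w_load=0.4, w_rt=0.3, w_util=0.3):
    """Lower is better. Combines CPU load, response time, and capacity
    utilization into one composite number. Weights are illustrative."""
    utilization = instance["connections"] / instance["capacity"]
    # Normalize response time against an assumed 1000 ms worst case.
    response = min(instance["response_ms"] / 1000.0, 1.0)
    return w_load * instance["cpu_load"] + w_rt * response + w_util * utilization

def pick(instances):
    # Route the next request to the instance with the best composite score.
    return min(instances, key=score)
```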

(Un)Predictable Provisioning
In data centers, it is best practice to deploy application instances on similarly capable hardware. This is because doing so provides predictable capacity and performance that can be used to better scale an application and ensure compliance with service level expectations.

When moving to a cloud environment - whether public or private - this practice can be lost. In the public cloud, that's because you have no control over the underlying hardware capabilities - you can only specify the compute capabilities of an instance. In a private cloud, you have more control over this, but you may not have provisioning systems intelligent enough to provide the visibility you need to make a provisioning decision in real time.

That can lead to problems. Consider this nugget from a recent blog post:

One thing that I’ve learned is that you can end up on a variety of different hardware but they don’t always act the same. Stackdriver has been a great help with this. For example, if we’re firing up 6 web servers, Stackdriver can help us see that 5 are cruising along at 20% CPU, while one is at 50% CPU. It allows us to see and address that anomaly.

http://www.stackdriver.com/devops-focus-matt-trescot-studyblue/

Let's assume, for a moment, this is true. Because it can be. Anyone who's ever dealt with hardware servers knows it's true - hardware, though matched in terms of basic capacity, can wind up performing differently. That's due to a number of factors, including the natural degradation of capacity over time due to "wear and tear" as well as the possibility of misconfiguration or the presence of some other artifact or code that may be eating up cycles.

In any case, the reason is not as important as the fact that this happens. It's important because we know operational axiom #2: as load increases, performance decreases. It also follows that as load increases, capacity decreases because, well, capacity and load go hand in hand.

Thus, in a cloud environment the aforementioned situation presents a problem: one of the "servers" is at a disadvantage and is not going to perform as well as the other five. Not only that, but its capacity as understood (and likely configured manually) by the load balancing service is now inaccurate. The load balancing service believes all six servers have a capacity of X connections, but the reality is that a higher CPU utilization rate can reduce that.
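
One way to model that correction, as a rough sketch: scale the configured capacity by the CPU headroom remaining under a target ceiling. The linear scaling and the 80% target are assumptions; the 20%/50% figures come from the quoted example:

```python
def effective_capacity(configured, cpu_util, target_util=0.80):
    """Scale configured capacity by remaining CPU headroom relative to a
    target utilization ceiling. Linear scaling is a simplifying assumption."""
    headroom = max(target_util - cpu_util, 0.0) / target_util
    return int(configured * headroom)

# Six "identical" servers, each configured for 1000 connections.
for cpu in (0.20, 0.20, 0.20, 0.20, 0.20, 0.50):
    print(cpu, effective_capacity(1000, cpu))
# The five at 20% CPU retain an effective capacity of 750 connections;
# the one at 50% drops to 375 -- half the headroom the service assumes.
```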

A simple load balancing service is not going to adjust, because it doesn't have the visibility or intelligence to make that connection. Whether the service is configured to use a round robin (almost never a good idea) or least connections (an acceptable choice if all other factors are predictable) algorithm, service levels are going to degrade unless the service is aware enough to recognize the discordance and react to it, as sketched below.
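
A minimal sketch of recognizing that discordance, assuming per-instance CPU metrics are available; the threshold and the down-weighting policy are purely illustrative:

```python
from statistics import median

def reweight(instances, tolerance=0.15):
    """Halve the traffic weight of any instance whose CPU utilization sits
    more than `tolerance` above the fleet median. Illustrative policy only."""
    fleet_median = median(i["cpu_load"] for i in instances)
    for i in instances:
        i["weight"] = 0.5 if i["cpu_load"] - fleet_median > tolerance else 1.0
    return instances

# The fleet from the quoted example: five at 20% CPU, one at 50%.
servers = [{"name": f"web-{n}", "cpu_load": 0.20} for n in range(5)]
servers.append({"name": "web-5", "cpu_load": 0.50})
for s in reweight(servers):
    print(s["name"], s["weight"])  # web-5 ends up with half the weight
```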

Thus, we end up with a situation in which predictable performance and availability are, well, not necessarily predictable. That introduces operational risk that must, somehow, be countered.

Correcting for Unpredictable Provisioning

In enterprise-class data centers, application-aware networking services are able to factor in not just connection counts and response times, but server load and a variety of other variables that can offset the unpredictability of provisioning processes. As noted earlier, application-aware load balancing services have the visibility and programmability necessary to monitor and measure the status of application instances and servers for a variety of metrics, including CPU utilization (load).

What's perhaps even more interesting is that programmability enables extensibility of gathering and monitoring those statistics. If the application instance can present a variable which you deem critical for making load balancing decisions, programmability of the load balancing service makes it possible to incorporate that variable into its algorithm (or create a completely new one, if that's what it takes).
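
As a sketch of that extensibility (the /lb-metric endpoint, the queue_depth variable, and the polling approach are all hypothetical; real load balancers expose this through their own programmability interfaces):

```python
import json
from urllib.request import urlopen

def poll_custom_metric(instance_url, timeout=2.0):
    """Fetch an application-published variable -- here, a hypothetical
    queue_depth -- from an endpoint the application chooses to expose."""
    with urlopen(f"{instance_url}/lb-metric", timeout=timeout) as resp:
        return json.load(resp)  # e.g. {"queue_depth": 12}

def pick_by_queue_depth(instance_urls):
    # Incorporate the custom variable into the algorithm: shortest queue wins.
    return min(instance_urls, key=lambda u: poll_custom_metric(u)["queue_depth"])
```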

All these factors combine to answer the question, "Why does the network need to be dynamic?" or "Why do we need SD<insert preferred "N" or "DC" here>?"

Dynamic implies an ability to react in the face of unanticipated (unpredictable) situations. Unpredictable provisioning that can result in inconsistent capacity and performance has to be countered somewhere, and that somewhere is going to be upstream of the application instances exhibiting erratic behavior. Upstream is usually (and almost always in any of today's scalable architectures) an ADC or load balancing service.

That load balancing service must be application-aware and programmable if it's going to execute on its mission of maintaining performance and availability of applications in the face of the potentially unpredictable provisioning processes of cloud computing environments.

DevOps: More than just deployment
DevOps practitioners must become adept not only at understanding the complex relationships between performance, availability, capacity, and load, but also at turning business and operational expectations into reality by taking advantage of both application and network infrastructure capabilities.

DevOps isn't, after all, just about scripting and automation. Those are tools that enable DevOps practitioners to do something, and that something is more than just deploying apps - it's delivering them, too.

•   •   •

Excerpt from the State of APM Infographic courtesy of Germain Software, LLC.
