
What DevOps Can Do About Cloud's Predictable Provisioning Problem

Cloud and software-defined architectures have brought to the fore the critical nature of load balancing.

Go ahead. Name a cloud environment that doesn't include load balancing as the key enabler of elastic scalability. I've got coffee, so take your time...

Exactly. Load balancing - whether implemented as traditional high availability pairs or clustering - provides the means by which applications (and infrastructure, in many cases) scale horizontally. It is load balancing that is at the heart of elastic scalability models, and that provides a means to ensure availability and even improve performance of applications.

But simple load balancing alone isn't enough. Too many environments and architectures are wont to toss a simple, network-based solution at the problem and call it a day. But rudimentary load balancing techniques that rely solely on a set of metrics are doomed to fail eventually. That's because a simple number like "connection count" does not provide enough context to make an intelligent load balancing decision. An application instance may currently have only 100 connections while another has 500, but if the capacity of the former is only 200 while the capacity of the other is 5000, a decision based on "least connections" is not the right one.
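To make the arithmetic concrete, here's a minimal sketch in plain Python - not any particular load balancer's API - contrasting a naive least-connections pick with one that weighs connections against capacity, using the 100/200 and 500/5000 figures above:

```python
# A minimal sketch (plain Python, not a vendor API) contrasting a naive
# least-connections pick with a capacity-aware one, using the numbers above.
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    connections: int  # current active connections
    capacity: int     # maximum connections the instance can handle

def least_connections(pool):
    # Naive: fewest connections wins; headroom is ignored.
    return min(pool, key=lambda i: i.connections)

def most_headroom(pool):
    # Capacity-aware: lowest utilization (connections / capacity) wins.
    return min(pool, key=lambda i: i.connections / i.capacity)

pool = [
    Instance("app-1", connections=100, capacity=200),   # 50% utilized
    Instance("app-2", connections=500, capacity=5000),  # 10% utilized
]

print(least_connections(pool).name)  # app-1 - the wrong choice
print(most_headroom(pool).name)      # app-2 - far more headroom
```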

Application-aware networking tells us that load balancing decisions - even rudimentary ones - should be made based on a variety of variables such as application load, response time, and capacity. That requires a modern load balancing service capable of not just tracking these metrics but gathering them from the application instances under management.
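As a rough illustration of what "application-aware" means in practice, the sketch below scores each instance on load, response time, and utilization gathered from the instances themselves. The weights and the 500 ms normalization ceiling are assumptions for illustration, not a standard:

```python
# Hypothetical application-aware selection: score instances on metrics
# gathered from the instances themselves. Weights are illustrative only.
from dataclasses import dataclass

@dataclass
class Metrics:
    cpu_load: float     # 0.0 - 1.0, reported by the instance
    response_ms: float  # recent average response time
    utilization: float  # connections / capacity, 0.0 - 1.0

def score(m):
    # Lower is better; 500 ms is an assumed normalization ceiling.
    return (0.4 * m.cpu_load
            + 0.3 * min(m.response_ms / 500.0, 1.0)
            + 0.3 * m.utilization)

def pick(pool):
    return min(pool, key=lambda name: score(pool[name]))

pool = {
    "app-1": Metrics(cpu_load=0.20, response_ms=80.0, utilization=0.50),
    "app-2": Metrics(cpu_load=0.35, response_ms=120.0, utilization=0.10),
}
print(pick(pool))  # app-2: busier CPU, but far more overall headroom
```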

(Un)Predictable Provisioning
In data centers, it is best practice to deploy application instances on similarly capable hardware. This is because doing so provides predictable capacity and performance that can be used to better scale an application and ensure compliance with service level expectations.

When moving to a cloud environment - whether public or private - this practice can be lost. In the public cloud, that's because you have no control over the underlying hardware capabilities - you can only specify the compute capabilities of an instance. In a private cloud you have more control over this, but you may not have provisioning systems intelligent enough to provide the visibility you need to make a provisioning decision in real time.

That can lead to problems. Consider this nugget from a recent blog post:

One thing that I’ve learned is that you can end up on a variety of different hardware but they don’t always act the same. Stackdriver has been a great help with this. For example, if we’re firing up 6 web servers, Stackdriver can help us see that 5 are cruising along at 20% CPU, while one is at 50% CPU. It allows us to see and address that anomaly.

http://www.stackdriver.com/devops-focus-matt-trescot-studyblue/

Let's assume, for a moment, this is true. Because it can be. Anyone who's ever dealt with hardware servers knows it's true - hardware, though matched in terms of basic capacity, can wind up performing differently. That's due to a number of things, including the natural degradation of capacity over time from "wear and tear" as well as the possibility of misconfiguration or the presence of some other artifact or code eating up cycles.

In any case, the reason is not as important as the fact that this happens. It's important because of operational axiom #2: as load increases, performance decreases. It also follows that as load increases, available capacity decreases because, well, capacity and load go hand in hand.

Thus, in a cloud environment the aforementioned situation presents a problem: one of the "servers" is at a disadvantage and is not going to perform as well as the other five. Not only that, but its capacity as understood (and likely configured manually) by the load balancing service is now inaccurate. The load balancing service believes all six servers have a capacity of X connections, but in reality the higher CPU utilization on that one instance reduces it.
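A toy calculation shows why the static number misleads. Assume - purely for illustration, this is not a vendor algorithm - that usable connection capacity shrinks roughly in proportion to CPU utilization:

```python
# Toy model: discount statically configured capacity by observed CPU.
# The linear formula is an assumption for illustration only.
CONFIGURED_CAPACITY = 1000  # connections, as configured on the load balancer

def effective_capacity(configured, cpu_util):
    return int(configured * (1.0 - cpu_util))

fleet = {"web-1": 0.20, "web-2": 0.20, "web-3": 0.20,
         "web-4": 0.20, "web-5": 0.20, "web-6": 0.50}

for name, cpu in fleet.items():
    print(name, effective_capacity(CONFIGURED_CAPACITY, cpu))
# web-1..web-5 -> 800, web-6 -> 500, yet the load balancer still
# believes all six can take the full 1000.
```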

A simple load balancing service is not going to adjust, because it doesn't have the visibility or intelligence to make that connection. Whether the service is configured to use round robin (almost never a good idea) or least connections (an acceptable choice if all other factors are predictable), service levels are going to degrade unless the service is aware enough to recognize the discrepancy as it occurs.

Thus, we end up with a situation in which predictable performance and availability are, well, not necessarily predictable. That introduces operational risk that must, somehow, be countered.

Correcting for Unpredictable Provisioning

In enterprise-class data centers, application-aware networking services are able to factor in not just connection counts and response times but also server load and a variety of other variables that can offset the unpredictability of provisioning processes. As noted earlier, application-aware load balancing services have the visibility and programmability necessary to monitor and measure the status of application instances and servers across a variety of metrics, including CPU utilization (load).

What's perhaps even more interesting is that programmability enables extensibility of gathering and monitoring those statistics. If the application instance can present a variable which you deem critical for making load balancing decisions, programmability of the load balancing service makes it possible to incorporate that variable into its algorithm (or create a completely new one, if that's what it takes).
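A hedged sketch of that extensibility point follows. The class and hook names are hypothetical, but the shape - register any operator-defined metric and fold it into the selection algorithm - is the idea:

```python
# Hypothetical programmable balancer: operators register custom metrics
# (any signal the application can present) with weights, and selection
# folds them all into one score. Names and weights are illustrative.
class ProgrammableBalancer:
    def __init__(self, instances):
        self.instances = instances
        self.metrics = []  # list of (metric_fn, weight)

    def register_metric(self, fn, weight=1.0):
        # Extensibility point: any operator-defined signal joins the algorithm.
        self.metrics.append((fn, weight))

    def pick(self):
        # Lower combined score wins.
        return min(self.instances,
                   key=lambda i: sum(w * fn(i) for fn, w in self.metrics))

lb = ProgrammableBalancer(["app-1", "app-2"])
# Stand-in collectors; real ones would poll the instances.
lb.register_metric(lambda i: {"app-1": 0.70, "app-2": 0.30}[i], weight=0.5)  # CPU
lb.register_metric(lambda i: {"app-1": 8.0, "app-2": 2.0}[i], weight=0.1)    # queue depth
print(lb.pick())  # app-2 (score 0.35 vs. app-1's 1.15)
```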

All these factors combine to answer the question, "Why does the network need to be dynamic?" or "Why do we need SD<insert preferred "N" or "DC" here>?"

Dynamic implies an ability to react in the face of unanticipated (unpredictable) situations. Unpredictable provisioning that can result in inconsistent capacity and performance has to be countered somewhere, and that somewhere is upstream of the application instances exhibiting erratic behavior. Upstream is usually (and almost always in today's scalable architectures) an ADC (application delivery controller) or load balancing service.

That load balancing service must be application-aware and programmable if it's going to execute on its mission of maintaining performance and availability of applications in the face of the potentially unpredictable provisioning processes of cloud computing environments.

DevOps: More than just deployment
DevOps practitioners must become adept not only at understanding the complex relationships among performance, availability, capacity, and load, but also at turning business and operational expectations into reality by taking advantage of both application and network infrastructure capabilities.

DevOps isn't, after all, just about scripting and automation. Those are tools that enable DevOps practitioners to do something, and that something is more than just deploying apps - it's delivering them, too.

•   •   •

Excerpt from the State of APM Infographic courtesy of Germain Software, LLC.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
