What DevOps Can Do About Cloud's Predictable Provisioning Problem

Cloud and software-defined architectures have brought to the fore the critical nature of load balancing

Go ahead. Name a cloud environment that doesn't include load balancing as the key enabler of elastic scalability. I've got coffee, so take your time...

Exactly. Load balancing - whether implemented as traditional high availability pairs or clustering - provides the means by which applications (and infrastructure, in many cases) scale horizontally. It is load balancing that is at the heart of elastic scalability models, and that provides a means to ensure availability and even improve performance of applications.

But simple load balancing alone isn't enough. Too many environments and architectures are wont to toss a simple, network-based solution at the problem and call it a day. But rudimentary load balancing techniques that rely on a single, simple metric are doomed to fail eventually. That's because a number like "connection count" does not provide enough context to make an intelligent load balancing decision. An application instance may currently have only 100 connections while another has 500, but if the capacity of the former is only 200 while the capacity of the latter is 5000, a decision based on "least connections" is exactly the wrong one.
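A minimal sketch of why that fails, using the numbers above (the pool structure and names are illustrative, not any particular product's API):

```python
from dataclasses import dataclass

@dataclass
class Instance:
    name: str
    connections: int  # current open connections
    capacity: int     # maximum connections the instance can sustain

pool = [
    Instance("app-1", connections=100, capacity=200),   # already at 50% of capacity
    Instance("app-2", connections=500, capacity=5000),  # only at 10% of capacity
]

# "Least connections" looks only at the raw count...
least_conn = min(pool, key=lambda i: i.connections)
print(least_conn.name)  # app-1 -- the instance closest to saturation

# ...while a utilization-aware choice picks the instance with the most headroom.
least_util = min(pool, key=lambda i: i.connections / i.capacity)
print(least_util.name)  # app-2
```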

Application-aware networking tells us that load balancing decisions - even rudimentary ones - should be made based on a variety of variables such as application load, response time, and capacity. That requires a modern load balancing service capable of not just acting on those metrics but gathering them from the application instances under management.
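One way such a service might combine those variables is a weighted per-instance score; the weights, normalization, and metric names here are assumptions for illustration, not a standard algorithm:

```python
def score(m: dict, weights=(0.4, 0.3, 0.3)) -> float:
    """Lower is better. Combines CPU load, response time, and connection
    utilization into one routing score; all inputs normalized to 0..1."""
    w_cpu, w_rt, w_conn = weights
    cpu = m["cpu_load"]                         # 0..1 CPU utilization
    rt = min(m["response_ms"] / 1000.0, 1.0)    # cap response time at 1s
    conn = m["connections"] / m["capacity"]     # connection utilization
    return w_cpu * cpu + w_rt * rt + w_conn * conn

metrics = {
    "app-1": {"cpu_load": 0.50, "response_ms": 180, "connections": 100, "capacity": 200},
    "app-2": {"cpu_load": 0.20, "response_ms": 90,  "connections": 500, "capacity": 5000},
}

target = min(metrics, key=lambda name: score(metrics[name]))
print(target)  # app-2 -- the instance with the most real headroom
```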

(Un)Predictable Provisioning
In data centers, it is best practice to deploy application instances on similarly capable hardware. This is because doing so provides predictable capacity and performance that can be used to better scale an application and ensure compliance with service level expectations.

When moving to a cloud environment - whether public or private - this practice can be lost. In the public cloud, that's because you have no control over the underlying hardware capabilities - you can only specify the compute capabilities of an instance. In a private cloud, you have more control over this, but you may not have provisioning systems intelligent enough to provide the visibility you need to make a provisioning decision in real time.

That can lead to problems. Consider this nugget from a recent blog post:

One thing that I’ve learned is that you can end up on a variety of different hardware but they don’t always act the same. Stackdriver has been a great help with this. For example, if we’re firing up 6 web servers, Stackdriver can help us see that 5 are cruising along at 20% CPU, while one is at 50% CPU. It allows us to see and address that anomaly.

http://www.stackdriver.com/devops-focus-matt-trescot-studyblue/
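That kind of anomaly is straightforward to surface programmatically. A rough sketch of the idea (the threshold is arbitrary and the metrics source is assumed - this is not Stackdriver's API):

```python
from statistics import mean, stdev

# CPU readings (percent) from the six web servers in the example above.
cpu = {"web-1": 20, "web-2": 21, "web-3": 19, "web-4": 20, "web-5": 22, "web-6": 50}

avg = mean(cpu.values())
sd = stdev(cpu.values())

# Flag any instance more than 1.5 standard deviations above the pool average.
outliers = [name for name, pct in cpu.items() if pct > avg + 1.5 * sd]
print(outliers)  # ['web-6']
```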

Let's assume, for a moment, this is true. Because it can be. Anyone who's ever dealt with hardware servers knows it's true - hardware, though matched in terms of basic capacity, can wind up performing differently. That's due to a number of things, including the natural degradation of capacity over time due to "wear and tear" as well as the possibility of misconfiguration or the presence of some other artifact or code that may be eating up cycles.

In any case, the reason is not as important as the fact that this happens. It's important because we know operational axiom #2: as load increases, performance decreases. It also follows that as load increases, capacity decreases because, well, capacity and load go hand in hand.

Thus, in a cloud environment the aforementioned situation presents a problem: one of the "servers" is at a disadvantage and is not going to perform as well as the other five. Not only that, but its capacity as understood (and likely configured manually) by the load balancing service is now inaccurate. The load balancing service believes all six servers have a capacity of X connections, but the reality is that a higher CPU utilization rate reduces that.
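A sketch of the adjustment a smarter service could make, discounting each instance's configured capacity by its observed CPU headroom (the linear discount model is an assumption, not how any particular load balancer computes it):

```python
def effective_capacity(configured_capacity: int, cpu_load: float) -> int:
    """Scale the statically configured connection capacity by the
    instance's remaining CPU headroom (1.0 - cpu_load)."""
    headroom = max(0.0, 1.0 - cpu_load)
    return int(configured_capacity * headroom)

# All six servers are configured identically, but one runs hot.
print(effective_capacity(1000, cpu_load=0.20))  # 800
print(effective_capacity(1000, cpu_load=0.50))  # 500 -- far less than configured
```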

A simple load balancing service is not going to adjust, because it doesn't have the visibility or intelligence to make that connection. Whether the service is configured to use round robin (almost never a good idea) or a least connections (an acceptable choice if all other factors are predictable) algorithm, service levels are going to degrade unless the service is aware enough to recognize the discordance.

Thus, we end up with a situation in which predictable performance and availability are, well, not necessarily predictable. Which introduces operational risk that must, somehow, be countered.

Correcting for Unpredictable Provisioning

In enterprise-class data centers, application-aware networking services are able to factor in not just connection counts and response times, but server load and a variety of other variables that can offset the unpredictability of provisioning processes. As noted earlier, application-aware load balancing services have the visibility and programmability necessary to monitor and measure the status of application instances and servers for a variety of metrics including CPU utilization (load).

What's perhaps even more interesting is that programmability enables extensibility of gathering and monitoring those statistics. If the application instance can present a variable which you deem critical for making load balancing decisions, programmability of the load balancing service makes it possible to incorporate that variable into its algorithm (or create a completely new one, if that's what it takes).
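In spirit, that extensibility might look like registering an operator-defined metric alongside the built-in ones. This is a hypothetical plugin interface, sketched for illustration - not any vendor's actual API:

```python
from typing import Callable, Dict

# Registry mapping metric names to extractor functions over raw instance stats.
metric_extractors: Dict[str, Callable[[dict], float]] = {
    "cpu": lambda stats: stats["cpu_load"],
    "conn_ratio": lambda stats: stats["connections"] / stats["capacity"],
}

def register_metric(name: str, fn: Callable[[dict], float]) -> None:
    """Add an operator-defined variable to the load balancing decision."""
    metric_extractors[name] = fn

# Example: the application exposes a queue depth we deem critical.
register_metric("queue_depth", lambda stats: min(stats["queue_depth"] / 100.0, 1.0))

def score(stats: dict) -> float:
    # Equal weighting across all registered metrics; lower is better.
    return sum(fn(stats) for fn in metric_extractors.values()) / len(metric_extractors)

stats = {"cpu_load": 0.3, "connections": 40, "capacity": 200, "queue_depth": 15}
print(round(score(stats), 3))  # 0.217
```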

All these factors combine to answer the question, "Why does the network need to be dynamic?" or "Why do we need SD<insert preferred "N" or "DC" here>?"

Dynamic implies an ability to react in the face of unanticipated (unpredictable) situations. Unpredictable provisioning that can result in inconsistent capacity and performance has to be countered somewhere, and that somewhere is going to be upstream of the application instances exhibiting erratic behavior. Upstream is usually (and almost always in any of today's scalable architectures) an ADC or load balancing service.

That load balancing service must be application-aware and programmable if it's going to execute on its mission of maintaining performance and availability of applications in the face of the potentially unpredictable provisioning processes of cloud computing environments.
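In practice, that could be as simple as a control loop in the load balancing layer that re-polls each instance and adjusts routing weights. A minimal sketch, assuming a /metrics endpoint on each instance and a headroom-based weight model (both hypothetical):

```python
import json
import time
import urllib.request

POOL = ["http://10.0.0.1:8080", "http://10.0.0.2:8080"]  # hypothetical instances
weights = {}  # url -> routing weight

def poll_metrics(base_url: str) -> dict:
    """Fetch per-instance stats from an assumed /metrics endpoint."""
    with urllib.request.urlopen(f"{base_url}/metrics", timeout=2) as resp:
        return json.load(resp)

def reweight() -> None:
    """Give each instance a routing weight proportional to its CPU headroom,
    so a hot instance automatically draws less new traffic."""
    for url in POOL:
        stats = poll_metrics(url)                       # e.g. {"cpu_load": 0.5}
        weights[url] = max(0.05, 1.0 - stats["cpu_load"])

if __name__ == "__main__":
    while True:          # the control loop: observe, adjust, repeat
        reweight()
        time.sleep(10)   # poll interval is arbitrary
```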

DevOps: More than just deployment
DevOps practitioners must become adept at not only understanding the complex relationships between performance and availability and capacity and load, but how to turn those business and operational expectations into reality by taking advantage of both application and network infrastructure capabilities.

DevOps isn't, after all, just about scripting and automation. Those are tools that enable DevOps practitioners to do something, and that something is more than just deploying apps - it's delivering them, too.

•   •   •

Excerpt from the State of APM Infographic courtesy of Germain Software, LLC.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
