
Why Scaling Down Is Harder Than Scaling Up

Elasticity should be non-disruptive

Elasticity is hailed as one of the biggest benefits of cloud and software-defined architectures. It's more efficient than traditional scalability models, which only went one direction: up. It's based on the premise that paying for peak capacity year-round, just to cover seasonal or periodic demand, is not only unappealing but unnecessary in the age of software-defined everything.

The problem is that scaling down is much, much harder than scaling up. Oh, not from the perspective of automation and orchestration. That is, as the kids say these days, easy peasy lemon squeezy. APIs have made adding and removing resources simplicity itself. There isn't a load balancing service available today without this capability - at least not one that's worth having.

But if you peek behind the kimono, you're going to quickly find out that perhaps it isn't as easy peasy as it first appears. Scaling down requires a bit more finesse than scaling up, at least if you care about the end user experience.

Consider for a moment the process of scaling up (or out, as is more often the case). A certain threshold is reached that indicates a need for more capacity. More often than not, this metric is based on connections, with each application instance able to handle an approximate number of them. When that threshold is reached, a new application instance (node) is launched and added to the load-balanced pool, and voila! More capacity, more connections, more consumers.
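In pseudo-code, that loop really is simple. Here is a minimal sketch, assuming a connection-based threshold; the Instance and Pool types, the launch callable, and the capacity figures are hypothetical stand-ins for whatever your cloud API and load balancing service actually expose.

```python
from dataclasses import dataclass, field

# Hypothetical figures, for illustration only.
MAX_CONNECTIONS_PER_INSTANCE = 1000
SCALE_UP_RATIO = 0.8  # add capacity once 80% of the pool's capacity is in use

@dataclass
class Instance:
    name: str
    connection_count: int = 0

@dataclass
class Pool:
    instances: list = field(default_factory=list)

    def total_connections(self) -> int:
        return sum(i.connection_count for i in self.instances)

def check_scale_up(pool: Pool, launch) -> None:
    """Launch a new node and add it to the pool when capacity runs low."""
    capacity = len(pool.instances) * MAX_CONNECTIONS_PER_INSTANCE
    if pool.total_connections() >= SCALE_UP_RATIO * capacity:
        pool.instances.append(launch())  # e.g. launch=lambda: Instance("node-3")
```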

[Figure: awkward elasticity]

At some point that same threshold is breached on the way back down. As demand for capacity wanes, the total connection count (measured across all instances) decreases. When that count crosses a pre-determined threshold, one instance is shut down and removed from the pool, and voila!

You've just disconnected a whole bunch of consumers - many of whom might have been in the middle of a transaction (you know, trying to give you money).

Needless to say, they aren't happy.
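Concretely, the disruptive path looks something like this - a sketch of the naive scale-down, reusing the hypothetical Pool and capacity constant from the scale-up sketch above. The pop() is exactly where those in-flight transactions die.

```python
SCALE_DOWN_RATIO = 0.4  # hypothetical: shed capacity below 40% utilization

def naive_scale_down(pool, terminate) -> None:
    """The disruptive version: kill an instance the moment demand wanes."""
    capacity = len(pool.instances) * MAX_CONNECTIONS_PER_INSTANCE
    if pool.total_connections() < SCALE_DOWN_RATIO * capacity:
        victim = pool.instances.pop()
        # Every connection the victim was carrying - including in-flight
        # transactions - is dropped right here, with no warning to the user.
        terminate(victim)
```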

That's because many of the systems implementing elastic scale these days haven't spent the last twenty years load balancing applications, and so don't recognize the importance of graceful degradation, or quiescence.

Graceful degradation was used in the olden days (and still is today, to be fair) when initiating a maintenance cycle. The requirement was that a server (today an instance) needed to be shut down for maintenance but it could not disrupt current user sessions. The load balancer would be "told" to begin quiescing (or bleeding off) connections in preparation for downtime. The service would immediately stop sending new connections to that server (instance) while allowing existing connections to complete. In this way, consumers could complete their tasks and when no connections were left, the server would be shut down and maintenance could begin.
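Sketched out, quiescence splits removal into two steps: take the instance out of rotation, then wait for its existing connections to reach zero before terminating it. The names and the polling interval are again hypothetical, and this assumes each instance in the toy Pool above carries a state flag.

```python
import time
from enum import Enum

class State(Enum):
    ACTIVE = "active"      # eligible for new connections
    DRAINING = "draining"  # finishing existing connections only

def drain_and_remove(pool, instance, terminate, poll_seconds: float = 5.0) -> None:
    """Gracefully quiesce an instance before shutting it down."""
    instance.state = State.DRAINING       # stop routing new connections here
    while instance.connection_count > 0:  # let existing sessions complete
        time.sleep(poll_seconds)
    pool.instances.remove(instance)
    terminate(instance)
```

Most mature load balancers expose this natively - HAProxy's drain state, for example, or connection draining (deregistration delay) on AWS load balancers - so the autoscaler's job is to trigger the drain and wait, not to reinvent it.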

This is not a simple thing to achieve. The load balancing service must be intelligent enough to stop sending new connections to an instance but not interrupt existing connections. It must be able to manage both active and semi-active instances for the same application, which requires stateful management across all application instances.
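The routing half of that statefulness might look like the sketch below: new connections only ever go to active instances, while draining ones stay tracked in the pool until their connection counts reach zero.

```python
def pick_backend(pool):
    """Route a new connection; draining instances stay tracked but are never chosen."""
    active = [i for i in pool.instances if i.state is State.ACTIVE]
    # Least-connections selection here, but any algorithm works as long
    # as it skips draining nodes while the pool keeps accounting for them.
    return min(active, key=lambda i: i.connection_count)
```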

In today's models, this means that elasticity must embrace a more graceful, elegant means of scaling down than just suddenly killing an instance and all its associated connections.

With more and more apps - both those aimed at employees and those designed for consumers - residing in cloud and managed-service environments, it is critical to evaluate how such providers support elasticity. Methods that disrupt productivity or interrupt consumer transactions are hardly worth the few pennies saved by immediately shutting down an instance the moment a threshold is crossed.

It behooves emerging software-defined models - and the devops teams planning to manage scale automatically - to recognize that elasticity isn't just about responding to thresholds; it's about responding seamlessly, in both directions.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
