
Why Scaling Down Is Harder Than Scaling Up

Elasticity should be non-disruptive

Elasticity is hailed as one of the biggest benefits of cloud and software-defined architectures. It's more efficient than traditional scalability models, which only went one direction: up. It's based on the premise that paying for idle capacity year-round just to cover seasonal or periodic peaks is not only unappealing but, in the age of software-defined everything, unnecessary.

The problem is that scaling down is much, much harder than scaling up. Oh, not from the perspective of automation and orchestration. That is, as the kids say these days, easy peasy lemon squeezy. APIs have made the ability to add and remove resources simplicity itself. There isn't a load balancing service available today without this capability - at least not one that's worth having.

But if you peek behind the kimono, you're going to quickly find out that perhaps it isn't as easy peasy as it first appears. Scaling down requires a bit more finesse than scaling up, at least if you care about the end user experience.

Consider for a moment the process of scaling up (or out, as is more often the case). A certain threshold is reached that indicates a need for more capacity. More often than not this metric is based on connections, with each application instance able to handle an approximate number of connections. When that threshold is reached, a new application instance (node) is launched, it's added to the load-balanced pool, and voila! More capacity, more connections, more consumers.
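To make that concrete, here's a minimal sketch of the scale-out half in Python. Everything in it - the Node/Pool model, the capacity constant, the 80% threshold - is a hypothetical illustration, not any particular orchestration API:

from dataclasses import dataclass, field

CONNS_PER_INSTANCE = 1000   # approximate connections one instance can handle
SCALE_UP_THRESHOLD = 0.8    # hypothetical: scale out at 80% of aggregate capacity

@dataclass
class Node:
    name: str
    connections: int = 0

@dataclass
class Pool:
    nodes: list = field(default_factory=list)

    def total_connections(self):
        # total connection count as measured across all instances
        return sum(n.connections for n in self.nodes)

def launch_instance(pool):
    node = Node(name=f"app-{len(pool.nodes) + 1}")
    pool.nodes.append(node)   # new capacity is immediately load balanced
    return node

def scale_out_if_needed(pool):
    capacity = len(pool.nodes) * CONNS_PER_INSTANCE
    if pool.total_connections() >= capacity * SCALE_UP_THRESHOLD:
        launch_instance(pool)   # threshold breached on the way up: add a node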

[Figure: awkward elasticity]

At some point that same threshold is breached on the way back down. As capacity demand wanes, the total connection count (the sum of connections across all instances) decreases. When that count crosses a predetermined threshold, an instance is shut down and removed from the pool, and voila!

You've just disconnected a whole bunch of consumers - many of whom might have been in the middle of a transaction (you know, trying to give you money).

Needless to say, they aren't happy.
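For contrast, here's the naive scale-in mirror image of the sketch above, continuing the same hypothetical Pool/Node model. The final comment is precisely the problem just described:

SCALE_DOWN_THRESHOLD = 0.4   # hypothetical: scale in at 40% of aggregate capacity

def scale_in_naive(pool):
    capacity = len(pool.nodes) * CONNS_PER_INSTANCE
    if len(pool.nodes) > 1 and pool.total_connections() <= capacity * SCALE_DOWN_THRESHOLD:
        victim = min(pool.nodes, key=lambda n: n.connections)
        pool.nodes.remove(victim)   # removed from the pool and shut down immediately:
                                    # victim.connections live sessions - some of them
                                    # mid-transaction - are severed along with it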

That's because many of the systems implementing elastic scale these days haven't spent the last twenty years load balancing applications, and so they don't recognize the importance of graceful degradation, or quiescence.

Graceful degradation was used in the olden days (and still is today, to be fair) when initiating a maintenance cycle. The requirement was that a server (today an instance) needed to be shut down for maintenance but it could not disrupt current user sessions. The load balancer would be "told" to begin quiescing (or bleeding off) connections in preparation for downtime. The service would immediately stop sending new connections to that server (instance) while allowing existing connections to complete. In this way, consumers could complete their tasks and when no connections were left, the server would be shut down and maintenance could begin.

This is not a simple thing to achieve. The load balancing service must be intelligent enough to stop sending new connections to an instance but not interrupt existing connections. It must be able to manage both active and semi-active instances for the same application, which requires stateful management across all application instances.
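Continuing the same hypothetical model, a quiescence-aware version might look something like this - a sketch of the flow, not any real load balancer's API. A draining node receives no new connections but keeps its existing ones until they complete:

import time

def pick_node(pool):
    # New connections go only to active (non-draining) instances
    active = [n for n in pool.nodes if not getattr(n, "draining", False)]
    return min(active, key=lambda n: n.connections)

def scale_in_gracefully(pool, victim, poll_seconds=5):
    victim.draining = True             # immediately stop sending new connections
    while victim.connections > 0:      # allow existing sessions to complete
        time.sleep(poll_seconds)
    pool.nodes.remove(victim)          # only now is it safe to shut the instance down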

In today's models, this means that elasticity must embrace a more graceful, elegant means of scaling down than just suddenly killing an instance and all its associated connections.

With more and more apps - both those aimed at employees and those designed for consumers - residing in cloud environments and managed service environments, it is critical to evaluate how such providers support elasticity. Methods that disrupt productivity or interrupt consumer transactions are hardly worth the few pennies saved by immediately shutting down an instance when a threshold is crossed.

It behooves emerging software-defined models - and the DevOps teams that plan on managing scale automatically - to recognize that elasticity isn't just about responding to thresholds; it's about responding seamlessly - in both directions.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
