Continuous Performance Validation | @CloudExpo #DevOps #IoT #BigData

Achieving Continuous Performance Validation with Synthetic Users

In most modern development groups, there's a strong focus on creating code that works. The reasoning goes that in more traditional, waterfall approaches, errors that aren't caught until late in the cycle are far more expensive to resolve because of their unintended consequences. Today, some groups practice test-driven development to ensure that code is always functional, while others work in short agile sprints where all code produced must be usable in the field by the end of each sprint.

There's a common understanding of what it means for a coding task to be "done." Yet this "doneness" is often only a measure of functionality - not necessarily usability.

Today, user experience is crucial to an application's success, and that goes well beyond what color your button is or how prominently a call-to-action is placed. Users leave your site if pages don't load fast enough or if the site simply feels sluggish when compared with your competitors' sites.

The question is: is there a process for validating that your app performs up to your customers' expectations, one that treats performance testing with the same rigor as functional testing?

Yes. It's called Continuous Performance Validation.

Defining Continuous Performance Validation
Continuous Performance Validation is the process of continuously testing, monitoring and improving performance at every stage of the application development lifecycle, from development to production, utilizing automated and collaborative tooling.

The heart of this process is a set of performance test scenarios that sit alongside more traditional functional unit tests. These tests, in conjunction with performance SLAs that can be placed on the task board as inputs to the development process itself, give you a library of performance unit tests that you can mix and match in a number of interesting ways.
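A performance unit test of this kind simply pairs a functional assertion with an SLA threshold. Here is a minimal sketch in Python; the `checkout_total` function and the 200 ms SLA are hypothetical stand-ins for whatever logic and budget your own task board specifies:

```python
import time

# Hypothetical SLA from the task board: checkout must finish within 200 ms.
SLA_SECONDS = 0.200

def checkout_total(prices):
    # Stand-in for the real business logic under test.
    return round(sum(prices), 2)

def test_checkout_meets_sla():
    start = time.perf_counter()
    total = checkout_total([19.99, 5.49, 3.00])
    elapsed = time.perf_counter() - start
    assert total == 28.48          # functional check: the code works
    assert elapsed < SLA_SECONDS   # performance check: it works fast enough
```

Because the SLA lives in the test itself, a performance regression fails the build the same way a functional bug would.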

As the building blocks of Continuous Performance Validation, these elements let you form long chains of specific behavior to use as test scenarios, mimicking how users actually use your app. These scenarios should be exercised in every phase of the application development lifecycle - development, pre-production, and production - to proactively test and monitor performance across environments.
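Chaining building blocks into a scenario can be as simple as timing each transaction step in sequence. A sketch, with the `login`/`search`/`checkout` steps as hypothetical placeholders for code that would drive a real browser or API client:

```python
import time

def step(name, action):
    """Run one user transaction step and record its duration."""
    start = time.perf_counter()
    action()
    return (name, time.perf_counter() - start)

def run_scenario(steps):
    """Chain individual transactions into one user journey, timing each."""
    return [step(name, action) for name, action in steps]

# Hypothetical building blocks; real ones would exercise the actual app.
login    = lambda: time.sleep(0.01)
search   = lambda: time.sleep(0.01)
checkout = lambda: time.sleep(0.01)

timings = run_scenario([("login", login),
                        ("search", search),
                        ("checkout", checkout)])
```

The same step functions can be recombined into different journeys, which is what makes the building-block approach reusable across environments.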

Before code is released, these scenarios can be pieced together to analyze how load and stress impact your app in different ways, and that information can be fed to Operations for infrastructure and capacity planning. Meanwhile, in production, the same scenarios can be executed to see whether real-world performance trends match what was predicted in test. Problems can be identified and sent back to development and test so that issues are patched quickly.
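Running a scenario under concurrent load is conceptually straightforward: spawn many synthetic users at once and summarize their response times. A sketch using Python's standard thread pool, with `synthetic_user` as a hypothetical placeholder for a real transaction chain:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def synthetic_user(user_id):
    """One simulated user running the scenario; returns its response time."""
    start = time.perf_counter()
    time.sleep(0.01)  # placeholder for the real chained transactions
    return time.perf_counter() - start

def load_test(concurrency=50):
    """Run the scenario under concurrent load and summarize response times."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        times = list(pool.map(synthetic_user, range(concurrency)))
    return {"users": concurrency,
            "avg": sum(times) / len(times),
            "max": max(times)}

report = load_test(concurrency=20)
```

Numbers like `avg` and `max` response time at a given concurrency are exactly the kind of data Operations can use for capacity planning.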

Synthetic Users: The Key to Continuity
Continuous Performance Validation depends on the idea of creating synthetic users that can move through the app just like real users. Throughout the synthetic users' journeys, they gather metrics on performance and the end user experience.

As part of your continuous integration process, you can continually validate the overall performance of the system by building automated scenarios, executed by synthetic users, and plugging them into the various stages of the development process.

In development, synthetic users execute transactions through unit tests that can validate localized performance requirements of key systems. They can fulfill a pre-planned test requirement, similar to TDD, and they can protect against regressions as code changes.

In pre-production, synthetic users can be launched from the cloud for truly distributed and georealistic load testing across the entire system. These users traverse various paths through the application and help define how it scales under a geographically distributed user base.

In production, synthetic users operate alongside real users. They navigate the system and exercise the same user paths you have already tested. But instead of sending out millions of them, you only need to send out a few, which gather metrics about performance and availability at particular moments in time and tell you if there is going to be a problem. Synthetic users validate that your performance is up to par and give you insight as to where it is not.
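A production synthetic probe is just a scenario run on a schedule by a handful of users, reporting availability and latency against the SLA. A sketch, where the `fetch` callable and 1-second SLA are hypothetical; in practice it would hit a real endpoint:

```python
import time

def probe(fetch, sla_seconds=1.0):
    """Run one synthetic check and report availability and latency."""
    start = time.perf_counter()
    try:
        fetch()
        available = True
    except Exception:
        available = False
    latency = time.perf_counter() - start
    return {"available": available,
            "latency": latency,
            "within_sla": available and latency < sla_seconds}

# Hypothetical fetch standing in for a request to a production endpoint.
result = probe(lambda: time.sleep(0.005))
```

Because only a few probes run at a time, they add negligible load while still catching availability gaps and latency drift before real users do.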

Conclusion
Continuous Performance Validation is a powerful process for apps that need to perform. By putting it in place, you won't be treating performance as an afterthought. Performance testing will be directly integrated into your standard process of development, making performance part of everyone's job description and everyone's responsibility.

Want to learn more? Check out our webinar on Continuous Performance Validation in Agile Development.

Photo Credit: Bicentennial Man (1999) from 1492 Pictures, Columbia Pictures Corporation, Laurence Mark Productions

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
