

How to Keep Test Cases in Sync Between QA and Production

The art of software development is being radically transformed by the Agile methodology and the DevOps culture. Strong teams emphasize collaboration, and a focus on pushing code to customers in real time is delivering a real boost to productivity. But perhaps no metric is more impacted by a successful Agile practice than software quality.

Agile affects quality in more ways than just what the end user sees. In fact, Agile ensures quality across the entire development process. It allows engineers working on specific modules to get feedback from live production users. Operational monitoring can be triggered by issues identified in QA. Automated testing results can be fed directly back into engineering. More than ever, it's important to keep all these teams in sync.

This post will examine one particular aspect of that challenge: keeping QA and Operations in lockstep. With all the rapid change that happens - particularly new tests being developed for new features, and new versions of the app rolling out into production - it's more important than ever to make sure the entire team is working on the same footing.

The Importance of Simulated Users
Simulated users are one of the most useful and important tools we have for keeping QA and Operations synced up. In load and performance testing prior to a software release, they are used at scale to put software through its paces under heavy stress. They are also used in production to monitor site performance without impacting real users.

Putting simulated users to work effectively will, in many cases, actually push the Operations and development teams closer together to meet and discuss. The data generated by simulated users gives each team a clearer picture of the other's performance characteristics - information they otherwise probably wouldn't have. Simulated users also let the teams be far more proactive in their problem solving by identifying issues before real people experience them.
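To make this concrete, a simulated user can be as simple as a script that walks a key user path and records status and timing - the same script can run at scale in QA load tests or on a schedule against production. Here is a minimal sketch in Python's standard library (the URL and path are hypothetical, and dedicated tools like NeoLoad do far more):

```python
import time
import urllib.request

def run_simulated_user(base_url, path="/", timeout=5):
    """Exercise one user step and record status and timing.

    The same check can run at scale during QA load tests or
    periodically against production as a synthetic monitor.
    """
    start = time.monotonic()
    try:
        with urllib.request.urlopen(base_url + path, timeout=timeout) as resp:
            status = resp.status
    except Exception:
        status = None  # any failure counts as an unhealthy result
    elapsed = time.monotonic() - start
    return {"path": path, "status": status, "elapsed_s": round(elapsed, 3)}
```

In practice you would feed these results into a shared monitoring or load-testing dashboard rather than inspect them by hand.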

Simulated Users Gone Wrong
The scenarios you run simulated users through can be a source of trouble if not properly handled. At best, old scenarios don't exercise the appropriate aspects of new software releases - at worst, old tests break new releases.

To fully understand the problems that can arise from a mismatch between your test and production environments, we can learn from the experience of Brad Stoner in a previous interview with Neotys. His story All About The Cookies describes a scenario where a traffic spike caused a major site to malfunction, even though the company had done extensive load testing beforehand. The problem was traced to an inconsistent use of cookies between the Production and QA environments.
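The lesson generalizes: simulated users should handle session state the same way real browsers do, in every environment. As a sketch of the idea using Python's standard library, a cookie-aware client ensures QA traffic exercises the same session and caching paths that production traffic does:

```python
import http.cookiejar
import urllib.request

def make_cookie_aware_opener():
    """Build an HTTP client that stores and replays cookies like a browser.

    Running simulated users through a cookie-aware client in both QA and
    production keeps session handling consistent across environments.
    """
    jar = http.cookiejar.CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    return opener, jar
```

A test script built on a bare client that drops cookies between requests would behave very differently from real users - exactly the kind of environment mismatch the story above describes.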

Consistency is critical, and your simulated users can play an important role in identifying risks before one of the following occurs:

  • Your site goes down because your testing environment didn't mimic your production environment, which means testing was irrelevant in the first place
  • You aren't monitoring a crucial user path, so real users experience problems that you don't know about until it's too late
  • Your system experiences bottlenecks in a number of places around the software, bringing the whole site to a halt
  • It becomes hard to troubleshoot because the QA and Operations teams lack a shared collection of data to communicate over

Best Methods for Keeping in Sync
There are multiple ways to keep your testing scenarios in sync. Below are a few.

Automated script tagging. You can set up automated processes for tagging scripts whenever they are created, updated, redesigned, fixed or cleaned out. This can eliminate confusion around the ownership of procedures. An automated system keeps everyone looking at the same information.
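As a sketch of what such tagging could look like, here is a hypothetical helper that appends who touched a script and why to a shared JSON manifest (a real setup would hook this into version control or CI instead):

```python
import json
import time
from pathlib import Path

MANIFEST = Path("script_manifest.json")  # hypothetical shared location

def tag_script(name, owner, action):
    """Record who changed a test script, what they did, and when."""
    entries = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else []
    entries.append({
        "script": name,
        "owner": owner,
        "action": action,  # e.g. "created", "updated", "fixed", "retired"
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    })
    MANIFEST.write_text(json.dumps(entries, indent=2))
```

Because every change lands in one place, both teams see the same history instead of maintaining private notes about who owns what.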

Common testing dashboard. It is also important to establish a common testing dashboard that spans pre-release load testing and simulated-user monitoring in production. This reveals information from both pre-release and post-release systems and helps bring the QA and Production teams together.
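One way such a dashboard can surface environment mismatches is by joining per-path measurements from both environments into a single table. A minimal sketch (the data shapes here are hypothetical):

```python
def combine_environments(qa, prod):
    """Join per-path response times (seconds) from QA and production.

    A side-by-side view makes it obvious when a path is missing from one
    environment or behaving very differently from its counterpart.
    """
    rows = []
    for path in sorted(set(qa) | set(prod)):
        rows.append({
            "path": path,
            "qa_s": qa.get(path),      # None: path was never tested in QA
            "prod_s": prod.get(path),  # None: path is not monitored in production
        })
    return rows
```

A path that appears in only one column is itself a finding: either QA is testing something production never sees, or production traffic hits a path QA never exercised.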

Regular meetings. Hold regular joint meetings and reviews where the QA and Operations teams discuss performance data together, keeping important issues visible to both sides.

Process QA. Designate a QA specialist to observe and improve quality across the entire process from development all the way to the production environment, establishing a robust Testing-In-Production practice.

Automation in Operations. Designate an Operations specialist to be responsible for ensuring that automated testing and deployment is taking place without any problems.

It is crucial to give both teams objectives that are related to operational support and quality. Lastly, leverage technology that makes it easy to stay in sync, like working off a platform that shares test scenario libraries between simulated testing and load testing. A few of our products here at Neotys (NeoLoad and NeoSense) will help you test in this fashion.

Test Well, Test Often
Rapid software development affects everyone across the organization. Not only do all teams have to be ready, they must also leverage collaboration and tooling to ease communication, share information, delegate accountability, improve on each other's work and stay in sync. We must remember that performance and load testing are crucial for keeping code quality high - but it is even more important to invest in processes that keep the quality of the testing environment high. Happy testing!

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
