
How to Boost Enterprise Software Testing Predictability
By Simon King

Are you tired of slipping deadlines and missed commitments? Do you feel like the consistency of software testing is out of your control? Here is how to improve the predictability of software testing across multiple teams and projects.

Enable Consistent Practices, Processes, and Tools
The foundation of enterprise software testing predictability is built on consistent practices, processes, and tools. Improving the predictability of software testing for a single team is difficult; doing so across multiple enterprise teams working on multiple features is exponentially more difficult.

Software testing predictability relies on the consistency of requirements management, code quality, test management, defects management, test environment management, and much more. The teams that support these activities have the largest impact on their predictability.

If multiple enterprise teams have inconsistent practices, processes, or tools, then these activities will be inconsistent. In order to enable predictable software testing, you have to enable your teams to have consistent practices, processes, and tools.

Standardizing Across Multiple Development Methodologies
It is easy to say that you should ensure everyone is using consistent practices, processes, and tools, but in the enterprise, the reality is that teams will be different. Some development teams may be using Agile practices, while others may be following Waterfall practices. Some of the Agile teams may be practicing Kanban while others follow Scrum.

From a tooling standpoint, different development teams may be using Rally, JIRA, spreadsheets, or Post-it Notes. While standardizing these methodologies and tools is clearly optimal, it may be unrealistic to expect Software Testing to drive those changes.

It is essential that software testing practices, processes, and tools support multiple development methodologies and tools. Support testing at the team level in whatever way makes each team most productive, and keep the management layer above the teams consistent.

This may sound theoretical, but I assure you it is pragmatic. Empowering teams to do work the way that is best for them while standardizing the management above those teams is exactly what methodologies like the Scaled Agile Framework (SAFe) and enterprise ALM platforms have done for development.

Your organization doesn’t have to be committed to SAFe to use its guidance for improving software testing management consistency. You also don’t have to use guidance from SAFe or any other methodology. You can determine which consistent practices you want to support. Modern enterprise test management tools, just like ALM tools, also help you standardize the management of multiple teams, methodologies, and tools.

Measure Software Testing Predictability
You must measure software testing predictability if you hope to improve it. No one has a simple answer or quick method for improving predictability across all these facets. The only way to start improving predictability across a complex system is to measure it, identify the biggest barriers, and resolve those barriers. Then rinse and repeat until you are executing within a suitable threshold.

It is key in the enterprise to be able to roll up individual team progress across different dimensions such as project, release, feature, or portfolio. You gain better insight faster by viewing the predictability of multiple teams across a release or feature.

It is also critical that you can drill down across different dimensions such as individual, team, project, or feature. For example, if you are viewing a chart of the predictability of testing across features and you see that one feature has far more defects than the rest, you would want to drill down to see how many defects are being found across the different teams that support that feature.

When you drill down, you might see that a particular team has far more defects than the rest of the teams. You will want to drill into that team to determine what the issue is. Now that you have pinpointed where the biggest issues are, you should investigate what is causing the issues and how you can resolve the situation. Over time your predictability will stabilize.
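The roll-up and drill-down described above is essentially grouped counting over a flat set of defect records. A minimal sketch, assuming hypothetical record fields ("feature", "team") and illustrative data; a real test management tool would query its repository instead:

```python
from collections import Counter

# Hypothetical flat defect records; field names and values are illustrative.
defects = [
    {"id": 1, "feature": "checkout", "team": "payments"},
    {"id": 2, "feature": "checkout", "team": "payments"},
    {"id": 3, "feature": "checkout", "team": "cart"},
    {"id": 4, "feature": "search",   "team": "search"},
]

# Roll up: defects per feature, across all teams.
by_feature = Counter(d["feature"] for d in defects)

# Drill down: pick the outlier feature, then count defects per team within it.
worst_feature, _ = by_feature.most_common(1)[0]
by_team = Counter(d["team"] for d in defects if d["feature"] == worst_feature)

print(by_feature)  # feature-level view for the release
print(by_team)     # team-level view within the outlier feature
```

The same pattern extends to any dimension the article mentions (individual, project, release, portfolio) by swapping the grouping key.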

Capture Consistent Historical Insight
A measurement is only as insightful as the context that is provided alongside it. What if I tell you the temperature is eighty-eight degrees? Am I talking about my temperature, my six-month-old daughter’s temperature, or the outside temperature? If I’m talking about the outside temperature, where am I and what time of year is it? What is the historical temperature for this place and time of year? Finally, how reliable is my measurement? Is it coming from the weather service or an old thermometer sitting in the shade of my house?

To measure software testing predictability, you need reliable measurements across all teams, features, releases, and portfolios over a significant period of time. This includes collecting the number of requirements, defects, fixes, and more. You also need to be able to slice this information by teams, features, releases, and portfolios, so that you know how many defects a team typically finds.

Measuring Predictability
There are three aspects to measuring enterprise software testing predictability:

1. Throughput
To determine your software testing predictability, you need to know your throughput of defects found by phase, fixes by phase, and bugs injected into production. You also need to be able to view these items in relation to the number of requirements. Once again, you need to be able to slice the information by teams, features, releases, and portfolios.
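Viewing defect throughput "in relation to the number of requirements" amounts to normalizing the phase counts. A minimal sketch with illustrative numbers and phase names (all assumptions, not real project data):

```python
# Illustrative counts for one release; phase names and values are assumptions.
requirements = 40
defects_by_phase = {"unit": 30, "system": 12, "uat": 5, "production": 2}

# Defect density per requirement, by phase, so teams of different sizes
# can be compared on the same scale.
density = {phase: n / requirements for phase, n in defects_by_phase.items()}

# Escape rate: the share of all defects that was injected into production.
total = sum(defects_by_phase.values())
escape_rate = defects_by_phase["production"] / total

print(density)
print(f"escape rate: {escape_rate:.1%}")
```

Tracking these normalized values per team, feature, and release gives the slices the section calls for.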

2. Throughput Variation
Next, you want to look at the historical trends of the throughput variation. The more consistent your throughput values are month-over-month, the higher your predictability is.
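One common way to quantify "consistent month-over-month" is the coefficient of variation: the standard deviation of monthly throughput relative to its mean. A minimal sketch with invented monthly counts for two hypothetical teams:

```python
import statistics

# Defects found per month for two hypothetical teams (illustrative data).
steady = [20, 22, 19, 21, 20, 22]
erratic = [5, 40, 12, 33, 8, 28]

def variation(monthly):
    """Coefficient of variation: stdev relative to the mean.
    Lower values mean steadier throughput, hence higher predictability."""
    return statistics.stdev(monthly) / statistics.mean(monthly)

print(f"steady:  {variation(steady):.2f}")
print(f"erratic: {variation(erratic):.2f}")
```

Both teams find roughly the same number of defects overall, but the steady team's throughput is far more predictable, which is exactly the distinction this measure surfaces.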

3. Single Source of Truth
This type of reporting is next to impossible if you are using spreadsheets for software testing reporting or older test management tools. You need a central repository for storing testing data. A modern enterprise test management tool will enable you to collect, store, and analyze these types of measures.

Conclusion
Missing deadlines and commitments is stressful. Boosting your software testing predictability is paramount for your team’s success and sanity. As crucial as predictability is, it is just one component of improving your overall enterprise software testing performance. Check out this webinar to learn how to improve enterprise software testing.

The post How to Boost Enterprise Software Testing Predictability appeared first on Plutora.


More Stories By Plutora Blog

Plutora provides Enterprise Release and Test Environment Management SaaS solutions aligning process, technology, and information to solve release orchestration challenges for the enterprise.

Plutora’s SaaS solution enables organizations to model release management and test environment management activities as a bridge between agile project teams and an enterprise’s ITSM initiatives. Using Plutora, you can orchestrate parallel releases from several independent DevOps groups all while giving your executives as well as change management specialists insight into overall risk.

Supporting the largest releases for the largest organizations throughout North America, EMEA, and Asia Pacific, Plutora provides proof that large companies can adopt DevOps while managing the risks that come with wider adoption of self-service and agile software development in the enterprise. Named a Gartner “Cool Vendor in IT DevOps,” Plutora aligns process, technology, and information to solve increasingly complex release orchestration challenges, upgrading enterprise release management from spreadsheets, meetings, and email to an integrated dashboard that gives release managers insight into and control over large software releases.
