Why Software Testers Should Strive to Go Unnoticed

Flying under the radar

Ah, the plight of software testers - destined to crank away behind the scenes ensuring all runs smoothly. Unlike musical conductors who command the spotlight while overseeing the performance of a piece, testers only seem to attract attention when things go terribly wrong.

The same can be said for most IT professionals, at least according to Reddit. In December, this question was posed to the AskReddit community:

"What is a job when done right no one notices, but the moment its [sic] done wrong all eyes are on you?"

A comment citing IT professionals came in second, behind only a response about bass players, with more than 5,500 upvotes. I'm sure many testers can commiserate with the commenter, who added, "When things work well, it's because of the glorious vision of Operations and Marketing. When things don't, it's because we're plagued by IT glitches that hold back the glorious vision of Operations and Marketing."

Okay, so life isn't fair and software testers seem to have drawn the short straw, at least when it comes to the blame game. When users notice application performance issues, the fallout may prove detrimental to the success not only of the application, but of the organization as a whole. To prevent recurring performance problems, understanding what went wrong and why is critical. Failing to adjust testing strategies accordingly will only lead to unhappy users, unhappy managers, and ultimately an unhappy you.

When Users Notice
Slowly but surely, organizations are realizing the monumental importance of validating application performance. At the same time, web and mobile application users are becoming less patient, opting to visit a competing site if pages fail to load or take too long to do so. How long is too long?

According to surveys of online shopping behavior, 47% of consumers expect a web page to load in two seconds or less and 40% will abandon a website that takes more than three seconds to load.

Three seconds.

A lot could happen in three seconds, or nothing could happen in three seconds – it simply depends on how long it takes your application to load. BUT WAIT, THERE'S MORE! Think three seconds is nothing? Check out what can happen in one second or less:

  • For Bing, a one-second delay results in a 2.8% drop in revenue
  • For the Aberdeen Group, a one-second delay resulted in 11% fewer page views, a 16% decrease in customer satisfaction, and a 7% loss in conversions
  • For Amazon, every 100ms increase in load time results in a 1% decrease in revenue

When a company faces site outages and poor performance, the finger-pointing begins - usually at the software testers, whom many believe to be at fault. After the QA team is forcibly dragged into the spotlight, testers must turn to the task at hand: discovering what went wrong and why.

What Went Wrong?
Application crashes and slow load times can be attributed to a number of factors, including but not limited to:

  • Software configuration issues for web servers, databases, or load balancers
  • Poor network configuration
  • Poorly optimized software code - for example, code that does not allow for concurrent access (see the sketch below)
  • Insufficient hardware resources or a lack of auto-scaling elastic computing

Obviously, these areas should receive immediate attention after end users experience poor performance.
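To make the concurrent-access point concrete, here is a minimal Python sketch - hypothetical handlers, not code from any real application - showing how a single shared lock around slow work serializes every request. With 20 concurrent callers, the locked version takes roughly 20 times as long:

```python
import time
import threading
from concurrent.futures import ThreadPoolExecutor

# A single shared lock forces requests to be handled one at a time.
_lock = threading.Lock()

def handle_request_serialized(user_id: int) -> str:
    with _lock:              # every request waits its turn
        time.sleep(0.1)      # stand-in for a slow query or I/O call
        return f"profile for user {user_id}"

def handle_request_concurrent(user_id: int) -> str:
    time.sleep(0.1)          # same work, but no shared lock
    return f"profile for user {user_id}"

def time_handler(handler, workers: int = 20) -> float:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(handler, range(workers)))
    return time.perf_counter() - start

if __name__ == "__main__":
    # Expect roughly 2.0s for the serialized handler, ~0.1s without the lock.
    print(f"serialized: {time_handler(handle_request_serialized):.2f}s")
    print(f"concurrent: {time_handler(handle_request_concurrent):.2f}s")
```

Bottlenecks like this rarely show up with a single test user, which is exactly why they survive until production load exposes them.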

If a thorough analysis of all of the above returns no red flags, you may be faced with one of several other common performance problems that you can explore in detail here.

If your team is utilizing a synthetic user monitoring tool, it should be able to capture synthetic user data from inside the browser and surface it in a rich set of dashboards. A deep dive into this data should yield insight into common and/or complicated user paths and the experiences synthetic users are having along those paths. With a synthetic monitoring system, you'll be able to identify performance issues before real users encounter them in your production environment - the ultimate safeguard against spotlight-inducing performance problems.
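If you want a feel for the mechanics, here is a deliberately bare-bones sketch of a synthetic check in Python - the URLs and threshold are hypothetical, and a real synthetic monitoring tool would drive a full browser and record far richer timings - that scripts a key user path, runs it on a schedule, and raises an alert before real users feel the pain:

```python
import time
import urllib.request

# Hypothetical user path through an example application.
USER_PATH = [
    "https://example.com/",          # landing page
    "https://example.com/search",    # search results
    "https://example.com/checkout",  # business-critical step
]
THRESHOLD_SECONDS = 3.0              # the "three seconds" cited above

def run_synthetic_check() -> None:
    for url in USER_PATH:
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
                status = resp.status
        except Exception as exc:
            print(f"ALERT: {url} failed outright: {exc}")
            continue
        elapsed = time.perf_counter() - start
        if status != 200 or elapsed > THRESHOLD_SECONDS:
            print(f"ALERT: {url} returned {status} in {elapsed:.2f}s")
        else:
            print(f"OK: {url} in {elapsed:.2f}s")

if __name__ == "__main__":
    run_synthetic_check()  # in practice, run this on a schedule (e.g., cron)
```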

Preventative Measures
It's not enough to merely identify and fix a performance issue after the fact. Software testers should use this information to create a list of action items designed to prevent future crashes and/or poor application performance.

After you understand the core issue, it may be tempting to stop at the initial, immediate solution. However, installing more storage after determining the problem was disk space still won't explain why you incorrectly estimated how much you needed in the first place. Keep asking "why" past the immediate fix and you'll be more likely to uncover the true root cause rather than a patch for the symptom.

After determining the solution, look for ways to recreate the issue in a controlled way. For example, use a load testing tool to simulate a large number of users stressing the system through a variety of realistic testing scenarios. Focus on scenarios that have been known to cause problems in the past, are new and untested, are likely to produce bottlenecks, involve complex transactions, and/or are critical paths for users.

Scenarios like these are crucial for discovering how your application reacts under abnormal amounts of load. By simulating user behavior, you'll be able to understand how tasks and transactions will function for your real users.
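As one illustration of the idea, here is a minimal load test sketch using the open-source Locust tool - one option among many, with hypothetical endpoints and traffic weights - that leans on a known-critical checkout transaction while lighter browsing traffic runs alongside it:

```python
# locustfile.py - a minimal load test sketch (endpoints are hypothetical).
from locust import HttpUser, task, between

class ShopperUser(HttpUser):
    # Simulated users pause 1-5 seconds between actions, like real people.
    wait_time = between(1, 5)

    @task(3)
    def browse(self):
        # Weighted 3x: most simulated users are just browsing.
        self.client.get("/products")

    @task(1)
    def checkout(self):
        # The complex, business-critical transaction worth stressing.
        self.client.post("/cart/add", json={"sku": "ABC-123", "qty": 1})
        self.client.post("/checkout", json={"payment": "test-card"})

# Example run against a staging environment:
#   locust -f locustfile.py --host https://staging.example.com \
#          --users 500 --spawn-rate 25 --run-time 10m --headless
```

Ramping the user count up in stages makes it easier to spot the point at which response times begin to degrade.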

Flying Under Users' Radar
If testers are able to successfully ensure the performance of a web or mobile application, users should remain unaware of their involvement. Sure, software testers may never be famous in the eyes of the public, but when it comes to working in QA, going unnoticed is, more often than not, a good thing.

Testers who go unnoticed deliver high-quality applications designed to provide exceptional performance across different browsers, devices, locations and more. Throughout a career, most testers will have to face the spotlight in the aftermath of application crashes or performance problems - but if you learn from mistakes, invest in the right tools, and put preventative measures in place, you'll not only stay out of the spotlight most of the time, you'll also be able to handle the heat when it becomes unavoidable.

More Stories By Tim Hinds

Tim Hinds is the Product Marketing Manager for NeoLoad at Neotys. He has a background in Agile software development, Scrum, Kanban, Continuous Integration, Continuous Delivery, and Continuous Testing practices.

Previously, Tim was Product Marketing Manager at AccuRev, a company acquired by Micro Focus, where he worked with software configuration management, issue tracking, Agile project management, continuous integration, workflow automation, and distributed version control systems.
