Software Testing Is Too Critical By @JPMorgenthal | @DevOpsSummit [#DevOps]

Software Testing Is Too Critical to Overlook

Today's software testing practices are abysmal.

You're a major provider of health insurance services to general consumers. Your website is a primary means of interacting with your customers, allowing them to view coverage, locate in-plan providers, review Explanations of Benefits for past services and see real-time information regarding deductibles and fees. It's midday and requests are taking a minute or more to process.

You're the provider of one of the leading business SaaS applications on the market. Millions of business people count on your application every day to communicate with their customers, analyze sales projections and execute marketing communications plans. Access from mobile devices is a must-have for these busy professionals, but the application crashes more often than it works.

What's going on in these scenarios? Damned if I know, and the real problem is that these providers likely don't know either. The speed at which the digital world is operating is forcing businesses to deliver faster and faster, usually at the expense of software development and testing best practices.

Oddly enough, anecdotes like the ones presented earlier emerge as part of DevOps conversations with customers. There's an understanding that entire parts of the software development lifecycle are being short-circuited or completely skipped in an attempt to deliver at an inhuman pace. There's a semi-incorrect belief by many of these customers that DevOps is the answer. I say semi-incorrect because, provided a few other links in the chain are completed first, DevOps could be an answer to delivering at the speed demanded with quality.

Anecdotally, I'd have to say 75% of my DevOps conversations eventually center on testing. If you follow a Continuous Delivery methodology this should make sense, as testing is distributed across the continuum of delivery. Here are some of the subjects being addressed with regard to testing:

  • Minimizing resource contention around QA environments
  • Identifying and preparing data for testing
  • Automation of regression testing
  • Methods of isolating changes to minimize full system testing
  • Defect management
  • Non-functional testing in enterprise software environments

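As an illustration of the "automation of regression testing" item above, here is a minimal sketch using Python's built-in unittest module. The calculate_premium function and its values are hypothetical stand-ins for real business logic, not anything from the article; the point is that pinned expectations turn past behavior into an automated safety net.

```python
import unittest

# Hypothetical function standing in for real business logic; the name and
# the rate rules are illustrative assumptions, not from the article.
def calculate_premium(base_rate, age):
    """Return an insurance premium: base rate plus an age surcharge."""
    if base_rate < 0 or age < 0:
        raise ValueError("inputs must be non-negative")
    surcharge = 1.5 if age >= 50 else 1.0
    return round(base_rate * surcharge, 2)

class RegressionTests(unittest.TestCase):
    """Pinned expectations: if a code change alters these results, the suite fails."""

    def test_baseline_rate(self):
        self.assertEqual(calculate_premium(100.0, 30), 100.0)

    def test_senior_surcharge(self):
        self.assertEqual(calculate_premium(100.0, 55), 150.0)

    def test_rejects_negative_input(self):
        with self.assertRaises(ValueError):
            calculate_premium(-1.0, 30)

if __name__ == "__main__":
    unittest.main()
```

Run on every commit, a suite like this addresses the resource-contention and full-system-testing bullets as well: fast, isolated tests need no shared QA environment.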
This is just a fraction of the issues that ultimately come up when reviewing bottlenecks and constraints that limit high-quality, resilient and speedy delivery of applications and modifications. These issues are also impacted by IT organizational structure (who owns infrastructure, licensing, etc.), politics, budget, time, tooling and skills. Hence, these are complex issues to deal with at a time when demand is increasing and time to deliver is shrinking. That said, the lack of quality will catch up with you eventually in the form of growing shadow IT, management transitions, loss of business, attrition, outsourcing and any other oblique means users have for avoiding your systems.

What can you do? Unfortunately, there's no single patterned answer that every business can follow to increase quality. I recommend businesses form Testing Centers of Excellence to centralize the governance of testing across the various groups involved with delivery. However, each business will only be able to absorb change to a degree predicated on time, budget and resources. If pushed to provide some direction, here's what I recommend to clients:

  • Hire or promote a testing lead who understands the science of testing.
  • Move as much testing earlier in the process as possible.
  • Place as much emphasis on non-functional requirements testing as code testing.
  • Incent reduction in defects versus defect identification and correction.
  • Incent zero-incident releases.
  • Invest in tools and training for automation.
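The "move testing earlier" and "invest in automation" recommendations can be combined into a simple release quality gate. The sketch below is a hypothetical illustration, assuming a pytest-based suite; the function name and default command are my assumptions, not a prescribed implementation.

```python
import subprocess
import sys

def run_quality_gate(test_command=("python", "-m", "pytest", "-q")):
    """Run the automated test suite and block the release on any failure.

    test_command is whatever invokes your suite; the pytest default here
    is an assumption for illustration.
    """
    result = subprocess.run(test_command)
    if result.returncode != 0:
        print("Quality gate failed: release blocked until defects are fixed.")
        return False
    print("Quality gate passed: release may proceed.")
    return True

if __name__ == "__main__":
    # Non-zero exit status stops a CI/CD pipeline at this stage.
    sys.exit(0 if run_quality_gate() else 1)
```

Wiring a gate like this into the pipeline shifts defect detection left: a failing test stops the release automatically, which is exactly the incentive structure the bullets above describe.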

Skipping or short-circuiting testing practices due to time limitations is penny-wise and pound-foolish. Poor quality at best will delay future releases while the current release is fixed and, worst case, will have a long-lasting detrimental impact on the business.

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect and Founder/CEO. His areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. JP is a published author of four trade publications, the most recent being "Cloud Computing: Assessing the Risks". JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
