Software Quality Metrics for Your Continuous Delivery Pipeline | Part I

How often do you deploy new software? Once a month, once a week, or every hour? The more often you deploy, the smaller your changes will be. That's good! Why? Because smaller changes tend to be less risky, since it's easier to keep track of what has actually changed. For developers, it's certainly easier to fix something you worked on three days ago than something you wrote last summer. A recent conference talk by AutoScout24 offered an analogy: think of your release as a container ship, with every one of your changes a container on that ship:

Your next software release en route to meet its iceberg

If all you know is that there is a problem in one of your containers, you have to unpack and check all of them. That makes no sense for a ship, and it makes no sense for a release either. Yet that's still what happens quite frequently when a deployment fails and all you get is "it didn't work." If, instead, you were shipping just a couple of containers, you could replace your giant, slow-maneuvering vessel with something faster and more agile - and if you were looking for a problem, you would only have to inspect a handful of containers. While adopting this practice in the shipping industry would be rather costly, it is exactly what continuous delivery allows us to do: deploy more often, get faster feedback, and fix problems faster.

A great example is Amazon, which shared its success metrics at Velocity:

Some impressive stats from Amazon showing the success of rapid continuous delivery

However - even small changes can have severe impacts. Examples?

  1. Heavy DOM Manipulations through JavaScript: Introduced by a "harmless" new JavaScript library for tracking link clicks
  2. Memory Leaks in Production: Introduced by a poorly tested remote logging framework downloaded from GitHub
  3. Performance Impact of Exceptions in Ops: Ops and Dev did not follow the same deployment steps (due to a lack of automation scripts), resulting in thousands of exceptions that maxed out the CPU on all app servers
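The memory-leak scenario in the second example is exactly the kind of regression a cheap measurement in the test stage can catch before production. As a minimal sketch (the `process_transaction` function, its leaky buffer, and the 512-byte threshold are hypothetical illustrations, not from the article), Python's tracemalloc can estimate net memory growth per transaction:

```python
import tracemalloc

def process_transaction(log_buffer):
    # Hypothetical transaction that leaks by retaining data in a shared
    # buffer, mimicking the poorly tested logging framework above.
    log_buffer.append(bytearray(1024))

def leaked_bytes_per_transaction(iterations=1000):
    """Measure average net memory growth per transaction."""
    retained = []
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        process_transaction(retained)
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return (after - before) / iterations

# A healthy transaction should hover near zero net growth; this one leaks
# roughly a kilobyte per call, which a test-stage check can flag.
growth = leaked_bytes_per_transaction()
print(f"net growth per transaction: {growth:.0f} bytes")
```

Run as part of an integration test, an assertion like `growth < 512` would fail the build as soon as a change starts retaining memory per transaction.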

Extending Your Delivery Pipeline
Even small changes need to be tracked, and their impact on overall software quality must be measured along the delivery pipeline, so that your quality gates can stop even the smallest change from causing a huge issue. The three examples above could have been avoided by automatically checking the following measures across the delivery pipeline and stopping the delivery when "architectural" regressions are detected:

  • The number of DOM manipulations
  • Memory usage or object churn rate per transaction
  • The number of exceptions, database queries, or log entries
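A quality gate over such measures can be as simple as comparing each build's numbers against a known-good baseline. Here is a minimal sketch (the metric names, baseline values, and 20% tolerance are hypothetical, chosen only to illustrate the idea):

```python
# Hypothetical baseline captured from the last known-good build.
BASELINE = {"dom_manipulations": 120, "db_queries": 15, "exceptions": 2}
TOLERANCE = 0.20  # fail the gate on a regression of more than 20%

def check_quality_gate(measured, baseline=BASELINE, tolerance=TOLERANCE):
    """Return the metrics that regressed beyond the tolerance."""
    regressions = []
    for metric, base in baseline.items():
        value = measured.get(metric, 0)
        if value > base * (1 + tolerance):
            regressions.append((metric, base, value))
    return regressions

# A "harmless" change that triples DOM manipulations stops the pipeline:
bad_build = {"dom_manipulations": 360, "db_queries": 15, "exceptions": 2}
print(check_quality_gate(bad_build))
```

A non-empty result would fail the pipeline stage, pointing straight at the "container" that changed instead of leaving you with "it didn't work."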

In a series of blog posts I will introduce metrics that you should measure along your pipeline as an additional quality-gate mechanism to prevent the problems listed above. It is important that:

  • Developers get these measurements in the commit stage
  • Automation engineers measure them in the automated unit and integration tests
  • Performance engineers add them to the load-testing reports produced in staging
  • Operations verifies how the real application behaves after a new deployment in production
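At the unit- and integration-test stage, these measurements can be captured with lightweight instrumentation. As one hedged sketch (the wrapper class, the data-access function, and the query budget of 2 are all hypothetical), a counting wrapper around a database connection lets an ordinary test assert on the number of queries per transaction:

```python
class QueryCountingConnection:
    """Hypothetical wrapper that counts queries issued through a connection."""

    def __init__(self):
        self.query_count = 0

    def execute(self, sql):
        self.query_count += 1  # a real wrapper would also delegate to the driver
        return []

def load_user_with_orders(conn, user_id):
    # Hypothetical data-access code: an accidental N+1 pattern introduced
    # by a small change would immediately inflate the count below.
    conn.execute("SELECT * FROM users WHERE id = %s" % user_id)
    conn.execute("SELECT * FROM orders WHERE user_id = %s" % user_id)

def test_query_budget():
    conn = QueryCountingConnection()
    load_user_with_orders(conn, 42)
    # Architectural regression check: fail fast if the query count grows.
    assert conn.query_count <= 2, "query budget exceeded: possible N+1 pattern"

test_query_budget()
```

The same pattern works for counting exceptions or log entries: instrument, count per transaction, and assert against a budget so regressions surface in the commit stage rather than in production.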

For each metric I introduce, I'll explain why it is important to monitor it, which types of problems it can detect, and how Developers, Testers, and Operations can monitor it. To read more, click here for the full article.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.


