Continuous Delivery Plumbing | @DevOpsSummit #DevOps #Docker #Microservices #ContinuousDelivery

DevOps teams and Continuous Delivery processes must continue to adapt and improve

DevOps and Continuous Delivery Plumbing - Unblocking the Pipes

Jack Welch, the former CEO of GE, once said: "If the rate of change on the outside is happening faster than the rate of change on the inside, the end is in sight." This rings truer than ever, especially because business success is now inextricably tied to the organizations that have become really good at delivering high-quality software innovations - innovations that disrupt existing markets and carve out new ones.

Like the businesses they've helped digitally transform, DevOps teams and Continuous Delivery processes must themselves continue to adapt and improve. Demands will increase to a point where the dizzying deployment rates seen today are standard and routine tomorrow. Even with a great culture, a plethora of tools and herculean team efforts, there will come a point where systemic issues impose a limit on what's actually achievable with DevOps.

One way to address this is with what I call Continuous Delivery plumbing - that is, finding every process and technology issue causing a blockage, applying automation to clear the pipes, and ultimately increasing the flow of value to customers. It sounds simple in theory, but as with actual plumbing, you'll need to get your hands dirty.

Any idle time is terminal - Continuous Delivery goals like faster lead times often remain elusive because of the constraints deliberately or unintentionally placed on IT. It's hard, of course, to counter entrenched culture and procedural excess, but we continue to be plagued by problems that are well within our control to fix. These include the usual suspects: development waiting on infrastructure dependencies, manual and error-prone release processes, too many handoffs, and of course leaving testing and monitoring until too late in the lifecycle.

Tools driving process improvements have helped to some extent. Open source nuggets like Git and Jenkins now enable developers to quickly integrate code and automate builds so that problems are detected earlier. Other advanced techniques like containerization are making application portability and reusability a reality, while service virtualization - simulating constrained or unavailable systems - allows developers, testers and performance teams to work in parallel for faster delivery.
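To make the service virtualization idea concrete, here is a minimal sketch of a stub that stands in for a constrained or unavailable downstream system, so teams can keep working in parallel. The endpoint path and canned payload are hypothetical, purely for illustration; real tooling in this space is far more capable.

```python
# Minimal service-virtualization stub: stands in for an unavailable
# downstream dependency so dev/test work can proceed in parallel.
# The /api/credit-check endpoint and its payload are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/api/credit-check": {"status": "approved", "score": 742},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = CANNED_RESPONSES.get(self.path)
        if payload is None:
            self.send_response(404)
            self.end_headers()
            return
        body = json.dumps(payload).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Point the application under test at localhost:8080 instead of the real system.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```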

All these (and many other) tools have a key role to play, but in the context of Continuous Delivery we often lack the insights needed to purposefully act on our considerable investments in pipeline automation - to, if you will, automate the automation. For example, node-based configuration management is a wonderful thing, but how much more powerful would it be if those configurations were managed in the context of an actual application-level baseline during the release process? Similarly, how much time could we save if test assets were automatically generated from dynamic performance baselines established during release cycles?
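As a rough sketch of that second idea - generating test assets from a dynamic performance baseline - the snippet below turns latencies captured during a release cycle into threshold assertions for the next test run. The file names, metric names and 20% tolerance are all assumptions for illustration, not part of any particular toolchain.

```python
# Sketch: turn a performance baseline captured during a release cycle
# into generated test thresholds. File names, metric names and the
# tolerance value are hypothetical.
import json

TOLERANCE = 1.20  # allow 20% headroom over the baseline (assumption)

def generate_thresholds(baseline_path: str, out_path: str) -> None:
    with open(baseline_path) as f:
        # e.g. {"checkout_p95_ms": 180, "search_p95_ms": 95}
        baseline = json.load(f)
    thresholds = {metric: value * TOLERANCE for metric, value in baseline.items()}
    with open(out_path, "w") as f:
        json.dump(thresholds, f, indent=2)

if __name__ == "__main__":
    generate_thresholds("release_baseline.json", "generated_perf_thresholds.json")
```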

Quality inspection actually sucks - There's a lot to love about DevOps and Lean, especially the transformative thinking (à la W. Edwards Deming) on why quality should start and end with the customer. In the consumer-centric age, customers now rate businesses on the quality of their software interactions and how quickly those experiences can be improved and extended.

But maintaining a fluid balance of speed and quality has proved difficult with existing processes. Too often, interrupt-driven code inspections, QA testing and rigid compliance checks are grossly mismatched to more agile styles of development and the types of applications now being delivered. Many existing processes also merely indicate quality shortfalls, rather than giving teams the information needed to drive quality improvements. For example, application performance management (mostly used in production) should also be built into the Continuous Delivery process itself. That helps DevOps teams keep finding the quality "spot fires," yes - but it also builds the feedback loops needed to do what's really valuable: extinguish them completely.
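One practical way to build APM into the pipeline itself is a quality gate: a stage that pulls a key metric from the monitoring system and fails the build on regression. The sketch below assumes a hypothetical APM endpoint, metric name and 10% regression budget; a real implementation would call your APM vendor's actual API.

```python
# Sketch of an APM-driven quality gate for a CD pipeline stage.
# The endpoint URL, metric name, baseline and budget are hypothetical.
import json
import sys
import urllib.request

APM_URL = "http://apm.example.internal/metrics/checkout_p95_ms"  # hypothetical
BASELINE_MS = 180.0       # from the last known-good release (assumption)
REGRESSION_BUDGET = 1.10  # fail the stage if more than 10% slower

def current_latency_ms() -> float:
    with urllib.request.urlopen(APM_URL) as resp:
        return float(json.load(resp)["value"])

if __name__ == "__main__":
    observed = current_latency_ms()
    limit = BASELINE_MS * REGRESSION_BUDGET
    if observed > limit:
        print(f"FAIL: p95 {observed:.0f}ms exceeds budget ({limit:.0f}ms)")
        sys.exit(1)  # non-zero exit fails the pipeline stage
    print(f"PASS: p95 {observed:.0f}ms within budget ({limit:.0f}ms)")
```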

The bar will never be high enough - As application architectures transition from monolithic to microservices, operational capability will become a critical business differentiator. With literally thousands of loosely coupled services being deployed at different rates, success will depend on managing these new platforms at scale. There are other specific challenges too. Newer dynamic microservice architectures with design-for-failure approaches make it increasingly difficult to build consistent development environments, and when combined with the complexities surrounding messaging and service interaction, comprehensive testing becomes much more challenging.
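One technique teams use against the service-interaction testing problem (not something prescribed here) is a lightweight consumer-driven contract check: each consumer records the response shape it depends on, and providers verify against it in CI. The contract fields and sample response below are invented for illustration.

```python
# Toy consumer-driven contract check. The contract fields and sample
# response are hypothetical; real tooling (e.g., Pact-style frameworks)
# is considerably richer.
CONSUMER_CONTRACT = {  # fields the consuming service relies on
    "order_id": str,
    "status": str,
    "total_cents": int,
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# In the pipeline, run this against the provider's actual (or stubbed) response.
sample = {"order_id": "A-1001", "status": "confirmed", "total_cents": 4250}
assert satisfies_contract(sample, CONSUMER_CONTRACT)
```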

From a purely quantitative perspective, release automation processes can (provided they scale) solve many of these issues. However, as we continue to raise the bar, it's also important to ensure Continuous Delivery leverages and fuses other processes to drive improvements - for example, by capturing realistic performance information before testing, cross-functional teams can develop much more confidence in releases. This is far preferable to the traditional approach, where monitoring is only ever used to detect problems after the proverbial horse has bolted.

Business success now hinges on the ability to constantly meet the demand for innovative, high-quality applications. That's challenging for organizations relying on systems and processes that were only ever designed to deploy software in larger increments over longer cycles. Achieving Continuous Delivery to overcome these obstacles is a fundamental goal of DevOps. It means always keeping the "pipes unblocked" - removing constraints, improving testing efficiency, and enriching processes to increase the velocity and quality of software releases.

More Stories By Pete Waterhouse

Pete Waterhouse, Senior Strategist at CA Technologies, is a business technologist with 20+ years’ experience in development, strategy, marketing and executive management. He is a recognized thought leader, speaker and blogger – covering key trends such as DevOps, Mobility, Cloud and the Internet of Things.
