Continuous Delivery Plumbing | @DevOpsSummit #DevOps #Docker #Microservices #ContinuousDelivery

DevOps teams and Continuous Delivery processes must continue to adapt and improve

DevOps and Continuous Delivery Plumbing - Unblocking the Pipes

Jack Welch, the former CEO of GE, once said: "If the rate of change on the outside is happening faster than the rate of change on the inside, the end is in sight." This rings truer than ever, especially because business success is inextricably linked to the organizations that have become really good at delivering high-quality software innovations - innovations that disrupt existing markets and carve out new ones.

Like the businesses they've helped digitally transform, DevOps teams and Continuous Delivery processes must themselves continue to adapt and improve. Demands will increase to the point where the dizzying deployments seen today are standard and routine tomorrow. Even with a great culture, a plethora of tools and herculean team efforts, there will come a point where many other systemic issues impose a limit on what's actually achievable with DevOps.

One way to address this is with what I call Continuous Delivery plumbing - that is, finding every process and technology issue causing a blockage, applying automation to clear the pipes, and ultimately increasing the flow of value to customers. It sounds simple in theory, but like actual plumbing, you'll need to get your hands dirty.

Any idle time is terminal - Continuous Delivery goals like faster lead times often remain elusive because of the constraints deliberately or unintentionally placed on IT. It's hard, of course, to counter entrenched culture and procedural excesses, but we continue to be plagued by problems that are well within our control to fix. These include the usual suspects: development waiting on infrastructure dependencies, manual and error-prone release processes, too many handoffs, and, of course, testing and monitoring left too late in the lifecycle.

Tools driving process improvements have helped to some extent. Open source staples like Git and Jenkins enable developers to quickly integrate code and automate builds so that problems are detected earlier. Other advanced techniques like containerization are making application portability and reusability a reality, while simulating constrained or unavailable systems (service virtualization) allows developers, testers and performance teams to work in parallel for faster delivery.
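
To make that last idea concrete, here's a minimal sketch of service virtualization: a tiny stub that stands in for a constrained or unavailable backend so teams can keep working in parallel. The endpoints and payloads are purely illustrative assumptions, not any particular product's API.

```python
# A minimal service-virtualization sketch: a stub HTTP server that simulates
# a backend that is unavailable or rate-limited in shared environments.
# Paths and canned data below are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/api/inventory": {"sku": "ABC-123", "in_stock": 42},   # stand-in data
    "/api/pricing":   {"sku": "ABC-123", "price": 19.99},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        if body is None:
            self.send_error(404, "no stub defined for this path")
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

if __name__ == "__main__":
    # Point dev/test clients at localhost:8080 instead of the real system.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```

Even a stub this simple lets developers, testers and performance engineers proceed while the real dependency is being built or is blocked.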

All these (and many other) tools have a key role to play, but in the context of Continuous Delivery we often lack the insights needed to purposefully direct our considerable investments in pipeline automation - if you will, to automate the automation. For example, node-based configuration management is a wonderful thing, but how much more powerful would it be if those configurations were managed in the context of an actual application-level baseline during the release process? Similarly, how much time could we save if test assets were automatically generated from dynamic performance baselines established during release cycles?
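
As a rough illustration of "automating the automation," the sketch below turns a performance baseline recorded during a release cycle into generated test assets. The baseline file format and the 20% headroom factor are assumptions for the example, not a specific tool's behavior.

```python
# Hypothetical sketch: generate latency assertions from a recorded
# performance baseline, so test assets track the release rather than
# being hand-maintained.
import json

def generate_latency_assertions(baseline_path: str, headroom: float = 1.2) -> dict:
    """Derive per-endpoint latency thresholds from a recorded baseline."""
    with open(baseline_path) as f:
        baseline = json.load(f)  # assumed: {"/api/checkout": {"p95_ms": 180}, ...}
    return {
        endpoint: {"max_p95_ms": round(stats["p95_ms"] * headroom)}
        for endpoint, stats in baseline.items()
    }

if __name__ == "__main__":
    assertions = generate_latency_assertions("baseline.json")
    with open("generated_perf_tests.json", "w") as f:
        json.dump(assertions, f, indent=2)  # feed into the test harness
```

The point isn't the format; it's that each release cycle refreshes the test assets automatically instead of letting them drift from reality.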

Quality inspection actually sucks - There's a lot to love about DevOps and Lean, especially the transformative thinking (à la W. Edwards Deming) on why quality should start and end with the customer. In the consumer-centric age, customers rate businesses on the quality of their software interactions and how quickly those experiences can be improved and extended.

But maintaining a fluid balance of speed and quality has proved difficult with existing processes. Too often, interrupt-driven code inspections, QA testing and rigid compliance checks are grossly mismatched to more agile styles of development and the types of applications now being delivered. Many existing processes also only give an indication of quality shortfalls, rather than providing teams with the information needed to drive quality improvements. For example, application performance management (mostly used in production) should also be built into the Continuous Delivery process itself - that will help DevOps teams keep finding the quality "spot fires," yes, but also build the feedback loops needed to do what's really valuable: extinguish them completely.
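
One way to build APM into the pipeline is a release quality gate. Here's a hedged sketch: it pulls a few metrics for the release candidate and fails the stage when they regress past the baseline. The metrics URL, metric names and thresholds are all illustrative assumptions; a real APM product has its own API.

```python
# Hypothetical quality-gate sketch for a CD pipeline stage: compare release
# candidate metrics against a baseline and block the release on regression.
import sys
import json
from urllib.request import urlopen

APM_URL = "http://apm.example.internal/metrics/release-candidate"  # assumed endpoint
BASELINE = {"error_rate": 0.01, "p95_latency_ms": 250}             # assumed limits

def release_gate() -> bool:
    with urlopen(APM_URL) as resp:
        metrics = json.load(resp)  # assumed: {"error_rate": 0.004, ...}
    regressions = [
        name for name, limit in BASELINE.items()
        if metrics.get(name, float("inf")) > limit
    ]
    for name in regressions:
        print(f"quality gate: {name}={metrics[name]} exceeds baseline {BASELINE[name]}")
    return not regressions

if __name__ == "__main__":
    sys.exit(0 if release_gate() else 1)  # non-zero exit blocks the release
```

Run from the pipeline, a failing exit code stops the deployment and, more importantly, feeds the regression data back to the team that can fix it.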

The bar will never be high enough - As application architectures transition from monolithic to microservices, operational capabilities will become a critical business differentiator. With thousands of loosely coupled services being deployed at different rates, success will depend on managing these new platforms at scale. There are other specific challenges too. Newer, dynamic microservice architectures with design-for-failure approaches make it increasingly difficult to build consistent development environments, which, combined with the complexities of messaging and service interaction, makes comprehensive testing much more challenging.

From a purely quantitative perspective, release automation processes can (provided they scale) solve many of these issues. However, as we continue to raise the bar, it's also important to ensure that Continuous Delivery leverages and fuses other processes as a means to drive improvements. For example, by capturing realistic performance information before testing, cross-functional teams can develop much more confidence in releases - see the sketch below. This is far preferable to the traditional approach, where monitoring is only ever used to detect problems after the proverbial horse has bolted.
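
For instance, a pre-test step might mine production access logs for the real request mix, so performance tests reflect actual usage rather than guesses. The log path and line format below are illustrative assumptions for the sketch.

```python
# Sketch (under an assumed log format of "METHOD PATH STATUS MS" per line):
# derive each endpoint's share of production traffic to shape a realistic
# pre-release load profile.
from collections import Counter

def workload_model(log_path: str) -> dict:
    """Return each endpoint's share of total traffic, e.g. {"/api/search": 0.61}."""
    hits = Counter()
    with open(log_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                hits[parts[1]] += 1  # parts[1] is the request path (assumed)
    total = sum(hits.values()) or 1
    return {path: count / total for path, count in hits.items()}

if __name__ == "__main__":
    for path, share in sorted(workload_model("access.log").items(),
                              key=lambda kv: -kv[1]):
        print(f"{path}\t{share:.1%}")
```

Feeding a profile like this into the load-test tooling closes the loop: monitoring data shapes the tests, instead of only reporting failures afterwards.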

Business success now hinges on the ability to constantly meet the demand for innovative, high-quality applications. But this is challenging if organizations rely on systems and processes that were only ever designed to deploy software in larger increments over longer cycles. Achieving Continuous Delivery to overcome these obstacles is a fundamental goal of DevOps. This means always ensuring the "pipes are unblocked" by removing constraints, improving testing efficiency, and enriching processes to increase the velocity and quality of software releases.

More Stories By Pete Waterhouse

Pete Waterhouse, Senior Strategist at CA Technologies, is a business technologist with 20+ years’ experience in development, strategy, marketing and executive management. He is a recognized thought leader, speaker and blogger – covering key trends such as DevOps, Mobility, Cloud and the Internet of Things.
