The Seven Spices of a Continuous Delivery Pipeline
By Andreas Prins

A Continuous Delivery pipeline as part of an Agile transformation is like spices in a meal. Without them, the food is bland and worthless. On the other hand, the right blend of spices will leave you craving more, stimulating your senses and energizing you. But as any good cook will tell you, it can be a bit difficult to find the exact right blend of spices for a specific dish.

Salt and pepper are usually basic requirements, but knowing how to give your creation a boost by adding more complex spices, like turmeric, star anise, ginger or coriander, is a little trickier. It requires selecting the spices with care to make the dish tasteful—and that’s exactly like choosing tools for your CD pipeline and building your pipeline up. In short, creating a Continuous Delivery pipeline is not like using a standard set of spices you store in your kitchen drawer. You must carefully choose your tools per the goals of your team.

Want to become the master chef of your Continuous Delivery pipeline? Here are 7 tips:

  1. Avoid creating monoliths
  2. Strike a balance between fixed and flexible components
  3. Treat your CD Pipeline as a value stream, not a bunch of tech tools
  4. Use the MVP approach to build and extend your pipeline
  5. Embrace a model that allows you to easily experiment
  6. Limit the number of homegrown solutions you build—don’t miss out on all the great tools already on the market
  7. Set up your CD pipeline like it’s your most critical piece of software

Avoid creating monoliths
Monolithic applications can be a useful part of backend operations. But as your customer-facing applications and interactions multiply, monoliths become difficult to handle, restricting agility. Keep in mind that most of the changes you’ll experience over the next few months and years will probably affect the monoliths. Tools come and go, new frameworks force you to adapt, and compliance is no longer a department but more like oxygen: it’s everywhere and crucial to survival. Accept that, in increasingly agile environments, monoliths are a thing of the past and should be avoided as you create your CD pipeline.

Strike a balance between fixed and flexible components
Every IT team in every organization, whether insurance, government, or banking, faces mandatory requirements. My advice is to set up a pipeline with a dual focus.

First, create fixed processes for the elements of the pipeline that are mandatory for getting software into production. Examples include version control, the four-eye principle, peer review, secure code review, and user access management. Think of this as the salt and pepper of your delivery process: the foundation of a tasty dish.
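These fixed gates can be expressed as a simple promotion check. The sketch below is purely illustrative, assuming a hypothetical `ReleaseCandidate` record; the point is that the mandatory checks are hard-coded and non-negotiable, while everything else stays pluggable:

```python
from dataclasses import dataclass, field

@dataclass
class ReleaseCandidate:
    """Hypothetical release candidate moving through the pipeline."""
    version: str
    approvals: set = field(default_factory=set)
    secure_scan_passed: bool = False

def may_promote(rc: ReleaseCandidate) -> bool:
    """Fixed, non-negotiable gates: the 'salt and pepper' of the pipeline.
    Four-eye principle: at least two distinct reviewers must approve,
    and the secure code review must have passed."""
    return len(rc.approvals) >= 2 and rc.secure_scan_passed

rc = ReleaseCandidate("1.4.2")
rc.approvals.update({"alice", "bob"})
rc.secure_scan_passed = True
print(may_promote(rc))  # True
```

In a real pipeline these checks would query your version control and review systems rather than an in-memory record, but the gate itself should remain this unambiguous.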

Next, make all non-mandatory processes flexible by using a modular approach. Allow teams to choose from a set of approved tools that can be easily attached to the pipeline. Not every team, for example, requires the same performance test tool. Depending on the technology, type, timing, and maturity of testing, you could let them select from, say, a set of five tools. This gives them freedom of choice while allowing you to maintain control and reduce maintenance.
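One way to implement this modularity is a registry of approved tools that teams pick from. The tool names and the `select_perf_tool` helper below are hypothetical, but they show the shape of "freedom within a curated set":

```python
# Hypothetical registry of pre-approved performance-test tools.
APPROVED_PERF_TOOLS = {"gatling", "jmeter", "k6", "locust", "artillery"}

def select_perf_tool(team: str, choice: str) -> str:
    """Teams choose freely, but only from the approved, maintainable set."""
    if choice not in APPROVED_PERF_TOOLS:
        raise ValueError(f"{choice!r} is not an approved tool for team {team!r}")
    return choice

print(select_perf_tool("payments", "k6"))  # k6
```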

Treat your CD Pipeline as a value stream, not a bunch of tech tools
CD pipelines require care and feeding to keep them running at peak efficiency. You can’t just pull together a bunch of tools and watch your pipeline magically transform. Your pipeline is better seen as a value stream that lets you visualize your release process, understand its throughput times, identify bottlenecks, and so on, so you can continually optimize your delivery cycle. The shorter a release is from “merge to master” to “deployment in production,” the faster feedback flows through your organization, and the better you can respond to last-minute demands.
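Measuring that throughput is straightforward once you record the two timestamps. This small sketch (timestamp format assumed) computes the lead time in hours from merge to production deployment, which is the number you would watch to find bottlenecks:

```python
from datetime import datetime

def lead_time_hours(merged_at: str, deployed_at: str) -> float:
    """Lead time from 'merge to master' to 'deployment in production'."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deployed_at, fmt) - datetime.strptime(merged_at, fmt)
    return delta.total_seconds() / 3600

print(lead_time_hours("2018-03-01T09:00:00", "2018-03-01T15:30:00"))  # 6.5
```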

Use the MVP approach to build and extend the pipeline
Building a CD pipeline is hard work. Don’t expect to create it overnight or to onboard teams in the blink of an eye. Start small and grow with the maturity of the team. It’s like learning to cook great meals. You might start by buying some pre-packaged seasonings from the supermarket and adding water. The more knowledgeable you become, the better you get at picking your own spices, which gives you the confidence to start experimenting. Why? Because you come to understand the subtleties of flavor and the effect of certain combinations.

Embrace a model that allows for experimentation
Speaking of experimenting, if you want to become an Agile organization, your CD pipeline should be flexible enough that you can try new things. Let’s be honest, tools in this field are like Roman emperors: they rise, shine and fall, so you must be able to experiment without breaking things. For example, if you balance your pipeline between fixed and flexible components as suggested above, you could try using multiple Docker containers without destabilizing the pipeline.
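A plug-in style stage registry makes such experiments cheap: the fixed pipeline always calls a stable stage name, while an experimental implementation can be swapped in behind it without touching the rest of the flow. The stage names and `run_stage` helper here are illustrative:

```python
# Sketch: each flexible stage lives behind a stable name, so an
# experimental variant can be swapped in without touching fixed gates.
def run_stage(name, registry, experiment=None):
    impl = registry.get(experiment or name, registry[name])
    return impl()

registry = {
    "containerize": lambda: "built with classic VM image",
    "containerize-docker": lambda: "built with Docker",  # experimental variant
}

print(run_stage("containerize", registry))  # built with classic VM image
print(run_stage("containerize", registry, experiment="containerize-docker"))  # built with Docker
```

If the experiment fails, you drop the variant from the registry and the pipeline falls back to the stable stage, so nothing breaks.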

Limit the number of homegrown solutions you build
With all the great tools on the market, there’s no need to start building your own CD tools. You can create robust pipelines that fit into the above principles using off-the-shelf tools.

Set up your CD pipeline like it’s your most critical software
From time to time I still hear people say that, when faced with urgent business demands, they skip over the pipeline and place software directly into production. The argument is that the pipeline is simply too slow to meet the demand. Here’s what I suggest:

  • Make your pipeline stable enough to work in every critical situation
  • Ensure that the pipeline extends end-to-end so you can optimize every activity toward getting working software into production
  • Make the pipeline fast enough so you’re not tempted to do everything manually
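Treating the pipeline as critical software also means testing the pipeline definition itself. A minimal sketch, with purely illustrative stage names, might assert the end-to-end properties listed above before any change to the pipeline goes live:

```python
# Sketch: smoke-test the pipeline definition the way you would any
# critical application. Stage names and order are illustrative.
PIPELINE = ["build", "unit-test", "security-scan", "deploy-staging", "deploy-prod"]

def check_pipeline(stages):
    """End-to-end: must start from a source build and finish in production,
    with the mandatory security gate somewhere in between."""
    assert stages[0] == "build", "pipeline must start from a source build"
    assert stages[-1] == "deploy-prod", "pipeline must reach production"
    assert "security-scan" in stages, "mandatory gate missing"
    return True

print(check_pipeline(PIPELINE))  # True
```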

Achieving availability, integrity, reliability, and speed is definitely hard work. But if you structure your pipeline as suggested with both fixed and flexible parts, it will be a lot easier to start small and grow into a highly efficient Continuous Delivery pipeline.

About the author
Andreas Prins is facilitator and manager of several DevOps teams. He loves to think and write about topics like transforming organizations, coaching teams and speeding up the delivery process. You can read his personal blog at IdeeTotIT.nl (Dutch).

The post The 7 Spices of a Continuous Delivery Pipeline appeared first on XebiaLabs.
