Understanding DevOps | @DevOpsSummit @IBMDevOps #DevOps

DevOps is a set of principles and practices that enables an organization to make its delivery of applications ‘lean’ and efficient

October 28, 2014

A simple description of DevOps is this:

‘An approach to Application Delivery that applies Lean principles to accelerate feedback and improve time to market’

What does this mean? In a nutshell, it means that DevOps is a set of principles and practices that enables an organization to make its delivery of applications ‘lean’ and efficient, while leveraging feedback from customers and users to continuously improve.

What do you ‘continuously improve’? Three things:

  1. The application being delivered
  2. The Environment of the application being delivered
  3. The process by which the application (and its environment) is delivered

The ‘continuous improvement’ of the application and its environment comes from the feedback mechanism. As the application is continuously delivered, customers or customer surrogates (if the new feature delivered cannot be made available to the customer) can use the delivered application and provide feedback on its functionality and behavior. This feedback can then be used in the next iteration to improve both the application itself and the environment it is delivered on. The application’s features can be enhanced, added to, or removed, based on the feedback. The environment can be enhanced or re-configured if it is not enabling the application to perform as expected, or is unable to deliver on the performance Service Level Agreements (SLAs) agreed upon.

The third area of improvement – improving the process by which the application is delivered – is where the crux of DevOps lies. How does one continually make the process of delivering the application leaner and more efficient – that is, continuously improve it?

Looking at delivery processes to continuously improve them is not a new approach. Lean Manufacturing and the Japanese manufacturing approach called Kaizen have been applied to improving factory processes for decades. DevOps is now taking these Lean approaches and applying them to Application Delivery. Agile development practices applied some of these principles to development and testing. DevOps applies them to end-to-end application delivery – from ideation to production.

Continuous Improvement – where to begin?

To begin applying Lean principles to application delivery processes, one first needs to identify where the ‘fat’ is that can be reduced or completely eliminated. Lean thinking leverages a technique known as ‘Value Stream Mapping’ to identify these areas of ‘fat’ or inefficiency. While one can carry out an extensive Value Stream Mapping exercise to analyze one’s application delivery processes in detail over a multi-week engagement with experts in the space, a simpler and quicker approach is to take some time to map your delivery pipeline and look for bottlenecks in how it operates. New requirements, enhancement requests and bugs to be fixed go in at one end of the delivery pipeline. Code running in production comes out the other end. How efficiently does this pipeline operate? What bottlenecks are there that can be eliminated or at least minimized? Where is the ‘waste’ that can be reduced?
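To make that quick mapping exercise concrete, here is a minimal sketch (in Python, with entirely hypothetical stage names and hours) of one way to record the stages of a delivery pipeline, compare active work time against waiting time, and flag the stage where items sit idle the longest as a first bottleneck candidate:

```python
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    process_hours: float  # time spent actively working on an item
    wait_hours: float     # time an item sits idle before the next stage

# Hypothetical pipeline stages and timings, purely for illustration
pipeline = [
    Stage("Requirements / ideation",    8, 40),
    Stage("Development",               24, 16),
    Stage("Testing",                   16, 72),
    Stage("Deployment to production",   2, 48),
]

total_process = sum(s.process_hours for s in pipeline)
total_wait = sum(s.wait_hours for s in pipeline)
lead_time = total_process + total_wait

# Flow efficiency: fraction of total lead time spent on value-adding work
flow_efficiency = total_process / lead_time

# The largest single chunk of waiting is a good first bottleneck candidate
bottleneck = max(pipeline, key=lambda s: s.wait_hours)

print(f"Lead time: {lead_time} h, flow efficiency: {flow_efficiency:.0%}")
print(f"Largest wait ('fat'): {bottleneck.name} ({bottleneck.wait_hours} h idle)")
```

Even rough numbers like these usually make it obvious where work items wait rather than move – which is exactly the ‘fat’ the mapping exercise is looking for.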

This value stream mapping identifies bottlenecks in the delivery pipeline. These bottlenecks are typically just symptoms of underlying ‘fat’ in the system; they need to be analyzed to identify the root causes of the inefficiencies. This list of root causes then needs to be prioritized, and the top three to five identified so that a mitigation plan can be developed. DevOps capabilities can now be applied to address them. An adoption roadmap for these capabilities and the associated practices can then be developed and put in motion.
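As a simple illustration of that prioritization step, one could score each root cause by its impact and the effort needed to address it, then pick the top three to five by that ratio. The root causes, scores, and ranking rule below are made-up placeholders, not a prescribed DevOps method:

```python
root_causes = [
    # (description, impact 1-10, effort 1-10) -- hypothetical examples
    ("Manual regression testing before every release", 9, 5),
    ("Hand-built, inconsistent test environments",      8, 6),
    ("Release approvals batched once per month",        7, 3),
    ("No automated deployment to production",           8, 7),
    ("Defects found late because feedback is slow",     6, 4),
    ("Unclear requirements causing rework",             5, 6),
]

# Simple impact-to-effort ratio; higher means more improvement per unit of work
ranked = sorted(root_causes, key=lambda c: c[1] / c[2], reverse=True)

print("Top candidates for the mitigation plan:")
for description, impact, effort in ranked[:5]:
    print(f"  {description} (impact {impact}, effort {effort})")
```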

It's Continuous:

The key word in all of this is ‘continuous’.

Adopting DevOps is not a single step, but a journey of continuously ‘deploying improvement’ and continuously improving one’s practices and culture.

Learn more:
Describing how to map out your delivery pipeline:


Check out this video of me doing a ‘mock’ value stream mapping:


Related Posts:

Understanding DevOps:

Adopting DevOps:


More Stories By Sanjeev Sharma

Sanjeev is a 20-year veteran of the software industry. For the past 18 years he has been a solution architect with Rational Software, an IBM brand. His areas of expertise include DevOps, Mobile Development and UX, Lean and Agile Transformation, Application Lifecycle Management and Software Supply Chains. He is a DevOps Thought Leader at IBM and currently leads IBM’s Worldwide Technical Sales team for DevOps. He speaks regularly at conferences and has written several papers. He is also the author of the DevOps For Dummies book.

Sanjeev has an Electrical Engineering degree from The National Institute of Technology, Kurukshetra, India, and a Master’s in Computer Science from Villanova University, United States. He is passionate about his family, travel, reading, Science Fiction movies and Airline Miles. He blogs about DevOps at http://bit.ly/sdarchitect and tweets as @sd_architect.
