Continuous Delivery 101 By @Dynatrace | @DevOpsSummit [#DevOps]

Continuous Delivery embraces automated deployments in various stages of the software delivery process

Continuous Delivery 101: Automated Deployments

The ability to automatically and reliably deploy entire application runtime environments is a key factor in optimizing the average time required to take features from idea to the hands of your (paying) customers. Minimizing this feature cycle time, or feature lead time, is, after all, the primary goal of Continuous Delivery. Today, I will introduce you to the whys and wherefores of deployment automation and discuss its importance for driving the adoption of DevOps.

The Power of Automated Deployments
Production servers can be described as "grown works of art": an unknown number of commands have been applied by various people over time, often in an ad-hoc manner, and written documentation is commonly outdated, if present at all.

Because such environments cannot be fully understood or exactly reproduced, companies periodically create full server backups to guard against server loss. Still, restoring an environment from backups may consume considerable resources with uncertain success (how frequently do you verify your backup strategy and assess the mean time to recover?), and it cannot compare with the flexibility and agility of being able to rapidly and reliably re-create any environment on demand - from development to production:

"Enable the reconstruction of the business from nothing but a source code repository, an application data backup, and bare metal resources."
- Jesse Robbins, CEO of Opscode

Continuous Delivery embraces automated deployments in various stages of the software delivery process and identifies manual deployments as one of the common release anti-patterns. It demands that we let computers do what computers do best. Over time, therefore, all deployments should be fully automated, making the release of software a repeatable and reliable push-button activity (a minimal sketch follows the list below):

  • Manual deployments are not consistent across environments
  • Manual deployments are slow and neither repeatable nor reliable
  • Manual deployments require extensive documentation (often outdated)
  • Manual deployments hinder collaboration (conducted by a single expert)
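
To make this concrete, here is a minimal sketch of what such a push-button deployment might look like as a script. The artifact path, host names, and service name are hypothetical, and a real setup would more likely rely on a configuration management or orchestration tool than on raw scp/ssh calls:

```python
#!/usr/bin/env python3
"""Minimal push-button deployment sketch (hypothetical hosts and paths).

Because every step is scripted, the exact same deployment can be run
against test, staging, and production - no ad-hoc manual commands.
"""
import subprocess
import sys

ARTIFACT = "build/myapp-1.4.2.tar.gz"               # hypothetical artifact
HOSTS = ["app01.example.com", "app02.example.com"]  # hypothetical hosts

def run(cmd):
    """Run a command, aborting the deployment on the first failure."""
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

def deploy(host):
    run(["scp", ARTIFACT, host + ":/opt/myapp/releases/"])
    run(["ssh", host, "/opt/myapp/bin/install.sh", ARTIFACT])
    run(["ssh", host, "systemctl", "restart", "myapp"])

if __name__ == "__main__":
    try:
        for host in HOSTS:
            deploy(host)
    except subprocess.CalledProcessError as err:
        sys.exit("Deployment failed: " + str(err))
    print("Deployment completed on all hosts.")
```

Because the script either completes every step or aborts loudly, it is repeatable, consistent across environments, self-documenting, and usable by anyone on the team - addressing each of the anti-patterns above.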

In combination with agile development practices and a sound suite of automated tests, deployment automation allows you to minimize feature cycle time, or feature lead time, by reducing waste. No doubt, time-to-market is important, if not vital: you will not know whether your users will adopt a new feature before you actually let them try it out, and the sooner you deliver, the earlier you can start to make money from it:

Create value quickly - Deployment Automation helps minimize your Feature Cycle Time

Another common anti-pattern is "deploying to a production-like environment only after development is complete." This means there can be almost no confidence that a particular software release will work successfully for end users if it has never been tested in a copy of production.

We will now examine how deployment automation fits into a Continuous Delivery deployment pipeline, where it lays an important foundation for establishing a high degree of confidence through the automated provisioning of production-like environments.

Deployment Automation in the Continuous Delivery Deployment Pipeline
As Jez Humble and Dave Farley put it in their book "Continuous Delivery," "A deployment pipeline is, in essence, an automated implementation of your application's build, deploy, test and release process."

The pipeline concept is analogous to Lean Manufacturing, where a production line is stopped whenever a defect is detected along the way. The problem is then addressed with immediate and utmost attention, and corrective measures are taken to minimize the possibility of making the same mistake ever again (Continuous Improvement) - a principle that is also important in Agile Software Development.

An exemplary Continuous Delivery Deployment Pipeline

Here is what a deployment pipeline could look like for your application. In general, the process is initiated whenever a developer commits code to a repository in a version control system (VCS) such as Subversion or Git. When the build automation server, such as Jenkins, which acts as the pipeline's control center, observes a change in the repository, it triggers a sequence of stages that exercise the build from different angles via automated tests, terminating immediately in case of failure. Only when a build passes all stages is it regarded as being of sufficient quality to be released into production.
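
As a rough illustration of this control flow, the sketch below runs the stages in sequence and stops at the first failure. The stage commands are placeholders; in practice, a build server such as Jenkins orchestrates the stages, and this loop is meant only to expose the fail-fast semantics:

```python
#!/usr/bin/env python3
"""Sketch of a deployment pipeline's control flow (placeholder commands)."""
import subprocess
import sys

# Hypothetical stage commands, ordered as in the pipeline described above.
STAGES = [
    ("commit",          ["./gradlew", "test", "assemble"]),
    ("acceptance-test", ["./scripts/acceptance_stage.sh"]),
    ("capacity-test",   ["./scripts/capacity_stage.sh"]),
]

def run_pipeline():
    for name, cmd in STAGES:
        print("=== Stage: " + name + " ===")
        if subprocess.run(cmd).returncode != 0:
            # Stop the line: a defective build must not travel any further.
            sys.exit("Pipeline failed in stage '" + name + "'.")
    print("All stages passed; this build is a release candidate.")

if __name__ == "__main__":
    run_pipeline()
```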

Stage #1: Commit Stage
In the commit stage, the build automation server checks out the software repository into a temporary directory, compiles the code, and executes any quick-running, environment-agnostic tests (mostly unit tests). After that, release artifacts, such as installer packages, are assembled and documentation is generated.
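
A hedged sketch of such a commit stage follows; the repository URL is hypothetical, and a Gradle build is assumed purely for illustration:

```python
"""Commit stage sketch: fast, environment-agnostic checks only."""
import subprocess
import tempfile

REPO = "https://example.com/scm/myapp.git"  # hypothetical repository URL

def commit_stage():
    # A fresh checkout in a temporary directory keeps builds reproducible.
    with tempfile.TemporaryDirectory() as workspace:
        subprocess.run(["git", "clone", "--depth", "1", REPO, workspace],
                       check=True)
        # Compile and run quick, environment-agnostic tests (mostly unit tests).
        subprocess.run(["./gradlew", "compileJava", "test"],
                       cwd=workspace, check=True)
        # Assemble release artifacts and generate documentation.
        subprocess.run(["./gradlew", "assemble", "javadoc"],
                       cwd=workspace, check=True)

if __name__ == "__main__":
    commit_stage()
```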

Stage #2: Acceptance Test Stage
In the acceptance test stage, deployment automation scripts (executable specifications of application runtime environments) are executed to create an environment that is dedicated to the remainder of this stage and that is highly similar, though not necessarily identical, to the production environment. Release artifacts are deployed, and smoke tests are executed to verify that the application and related services are up and running. Finally, automated acceptance tests are run to verify the application at the business level.
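
As an example of the smoke-testing step, this sketch simply checks that a freshly deployed application answers on a few endpoints; the base URL and endpoint paths are hypothetical:

```python
"""Smoke test sketch: verify that the deployed application is up."""
import sys
import urllib.error
import urllib.request

BASE_URL = "http://acceptance.example.com"  # hypothetical test environment
ENDPOINTS = ["/health", "/login"]           # hypothetical smoke endpoints

def smoke_test():
    for path in ENDPOINTS:
        try:
            with urllib.request.urlopen(BASE_URL + path, timeout=5) as resp:
                print("OK: " + path + " -> " + str(resp.status))
        except urllib.error.URLError as err:
            # Any unreachable or failing endpoint fails the whole stage.
            sys.exit("Smoke test failed for " + path + ": " + str(err))

if __name__ == "__main__":
    smoke_test()
```

Only after the smoke tests pass is it worth spending time on the longer-running, business-level acceptance tests.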

Stage #3: Capacity Test Stage
Similar to stage two, but automated capacity tests, typically load tests, are run to verify that the system can deliver a defined level of service under production-like load conditions. Here, it is assessed whether the application is fit for purpose from a non-functional perspective, mostly with regard to response time and throughput.
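
For illustration, here is a minimal sketch of a capacity check that measures throughput and 95th-percentile response time against a hypothetical threshold; a dedicated load testing tool would be used for realistic, production-like load:

```python
"""Capacity test sketch: throughput and p95 latency under concurrent load."""
import sys
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://capacity.example.com/api/orders"  # hypothetical endpoint
REQUESTS = 200         # total requests to issue
CONCURRENCY = 20       # simulated concurrent users
MAX_P95_SECONDS = 0.5  # hypothetical service-level threshold

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10):
        pass
    return time.perf_counter() - start

if __name__ == "__main__":
    begin = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        latencies = sorted(pool.map(timed_request, range(REQUESTS)))
    elapsed = time.perf_counter() - begin
    p95 = latencies[int(0.95 * len(latencies)) - 1]
    print("Throughput: %.1f req/s, p95 latency: %.3fs"
          % (REQUESTS / elapsed, p95))
    if p95 > MAX_P95_SECONDS:
        sys.exit("Capacity test failed: p95 latency above threshold.")
```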

Stages 4 and 5, along with further insight, are covered in the full article.

More Stories By Martin Etmajer

Leveraging his outstanding technical skills as a lead software engineer, Martin Etmajer has been a key contributor to a number of large-scale systems across a range of industries. He is as passionate about great software as he is about applying Lean Startup principles to the development of products that customers love.

Martin is a life-long learner who frequently speaks at international conferences and meet-ups. When not spending time with family, he enjoys swimming and yoga. He holds a master's degree in Computer Engineering from the Vienna University of Technology, Austria, with a focus on dependable distributed real-time systems.
