Logging and Continuous Delivery By @FloMotlik | @DevOpsSummit [#DevOps]

Continuous Delivery is the future of building high-quality software

Guest blog post by Florian Motlik, Cofounder & CTO of Codeship Inc.

Why Great Logging Is Key to Continuous Delivery

Over the last few years Continuous Delivery has gained a massive following, with many development teams embracing the style. Companies have chosen (as with many other modern developer tools) either to build their own tooling or to embrace a hosted service like Codeship. In the end, whether you go with a hosted service or roll your own, the goal is to move faster and build a product that your customers really love. For that you need to iterate quickly, get feedback, and iterate again.

Successfully rolling out that process depends on many variables. Proper logging is one of those variables, and can be a helpful tool to remove fear.

Thou shalt not be afraid
As we've moved into the age of cloud software development, it is all about team productivity. Getting started on infrastructure is virtually free, so every team starts on a level playing field, and you need to constantly increase productivity to win.

By far the biggest killer of productivity is fear of moving faster because you might break things.

When your team, processes, or technology are not built with constant change in mind, you slow down your releases to regain a sense of control. This is a downward spiral that leads only to slower processes, less innovation, and, eventually, losing your market.

Fear stops experiments and promotes stagnation.

Having a repeatable and easily automated process can drastically reduce that fear. The more often you execute that automated process, the safer you feel. Over time this confidence grows as potential issues are discovered and fixed.
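
As a minimal sketch of what such a repeatable process can look like (the `make` targets here are hypothetical placeholders for your own test, build, and deploy commands), a script like the following runs the same steps in the same order on every execution and logs each one:

```python
import logging
import subprocess
import sys

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("pipeline")

# Hypothetical steps; swap in your own test, build, and deploy commands.
STEPS = [
    ("run tests", ["make", "test"]),
    ("build artifact", ["make", "build"]),
    ("deploy", ["make", "deploy"]),
]

for name, cmd in STEPS:
    log.info("starting step: %s", name)
    result = subprocess.run(cmd)
    if result.returncode != 0:
        # Fail fast: every run behaves identically, and the log shows
        # exactly which step broke.
        log.error("step failed: %s (exit code %d)", name, result.returncode)
        sys.exit(result.returncode)
    log.info("finished step: %s", name)
```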

A second, very important improvement is getting deep insight into the processes and workflows happening in your application.

When you continuously deploy changes to your application, being able to trace any step your application makes becomes your main tool for debugging your production system.

The insight you can gain from looking through your logs will often show you immediately the problem a recent deploy introduced into your infrastructure.
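
One simple way to make that connection visible (a sketch of the general technique, not of how Codeship specifically does it; the `DEPLOY_VERSION` environment variable is an assumed convention) is to stamp every log line with the identifier of the release it was produced under:

```python
import logging
import os

# Assumed convention: the deploy process exports the release identifier
# (for example, the Git SHA) into the application's environment.
DEPLOY_VERSION = os.environ.get("DEPLOY_VERSION", "unknown")

logging.basicConfig(
    level=logging.INFO,
    # Every line carries the release it was produced under, so a spike
    # of errors right after a deploy points directly at that deploy.
    format="%(asctime)s release=" + DEPLOY_VERSION + " %(levelname)s %(message)s",
)

logging.getLogger(__name__).info("payment workflow started")
```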

This is indispensable with Continuous Delivery.

While metrics are a great and very important part of getting insight into your infrastructure, they only represent the state of the system. To understand how the system got into that state, you need to be able to trace and deeply understand everything happening in your infrastructure.

Integration with paging systems provides an additional level of safety on top of that, making sure you are always aware of problems as they happen.

Let there be light
We've grown accustomed to having full insight into our testing and deployment process, as we have been using Codeship to build Codeship for a very long time. We realized we needed that same insight into our application as well to build the kind of infrastructure that supports the quality we want to deliver.

A good logging strategy and an overview of your most important workflows are necessary.

Defining a graph of all the states that a workflow can have in your system makes it easy to add logging to each of those states and the transitions between them, as in the sketch below. At that point logging is not an afterthought, but an integral part of your software development effort.
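
As an illustration (the workflow, its states, and its transitions here are all hypothetical), a state graph can be made explicit in code, with every transition validated and logged at a single choke point:

```python
import logging

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("workflow")

# Hypothetical build workflow: each state lists its legal successors.
TRANSITIONS = {
    "queued":    {"running"},
    "running":   {"succeeded", "failed"},
    "succeeded": set(),
    "failed":    set(),
}

class Workflow:
    def __init__(self, workflow_id):
        self.workflow_id = workflow_id
        self.state = "queued"
        log.info("workflow=%s entered state=%s", workflow_id, self.state)

    def transition(self, new_state):
        # Logging lives in the one method every state change passes
        # through, so no transition can go unrecorded.
        if new_state not in TRANSITIONS[self.state]:
            log.error("workflow=%s illegal transition %s -> %s",
                      self.workflow_id, self.state, new_state)
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        log.info("workflow=%s transition %s -> %s",
                 self.workflow_id, self.state, new_state)
        self.state = new_state

build = Workflow("build-42")
build.transition("running")
build.transition("succeeded")
```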

This needs to be clearly communicated to your team so everyone follows it thoroughly.

For example, we test and deploy code for thousands of companies with many different language and infrastructure requirements. Those companies connect GitHub or Bitbucket as their source code management system and deploy to various hosting providers like Heroku or AWS. There are many moving parts in that system, so we need to be able to detect and debug problems at any time without effort. A well-thought-out logging strategy helps tremendously and makes it easy to fix issues when they come up. We can follow any build through all of our infrastructure and correlate issues between builds or across our infrastructure at any time.

You can read more about how we use Logentries at Codeship in an earlier post.

Grand Central Logging
Being able to follow any workflow means collecting the logs of your various services in one place. If your developers have to look through various sources to correlate potential problems, productivity drops and issues take far longer to resolve.
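
A common way to make that correlation work (a sketch, assuming every service is handed the build ID, for example through a request header or message field) is to attach one shared correlation ID to every log line a build produces, so the centralized log store can stitch the full trail back together:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s build_id=%(build_id)s %(message)s",
)

def get_logger(build_id):
    # LoggerAdapter injects the shared build_id into every record, so
    # lines from different services can be joined on that single field.
    return logging.LoggerAdapter(logging.getLogger("service"),
                                 {"build_id": build_id})

# Two hypothetical services handling the same build emit lines that a
# central log store can correlate on build_id.
get_logger("b-1234").info("cloning repository")
get_logger("b-1234").info("starting test container")
```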

Conclusion
Continuous Delivery is the future of building high-quality software. Automated testing and deployment form the basis of the workflow, but many other tools, like centralized logging and error reporting, are important building blocks of that workflow as well.

When your system feels like a black box, you will hesitate to release changes to that infrastructure. Make sure you're not stuck with that black box; build a workflow that makes your team more productive and increases your product's quality.

If you want to learn more about Continuous Delivery, you can also take a look at our crash course on the Codeship homepage.
