Log Aggregation Across Dev and Ops | @DevOpsSummit #DevOps #Docker #Containers #Microservices

Modern tools for log aggregation can be hugely enabling for DevOps approaches

Using Log Aggregation Across Dev & Ops: The Pricing Advantage
By Rob Thatcher, Co-founder and Principal Consultant at Skelton Thatcher Consulting.

Summary: the pricing of tools or licenses for log aggregation can have a significant effect on organizational culture and the collaboration between Dev and Ops teams.

Modern tools for log aggregation (of which Logentries is one example) can be hugely enabling for DevOps approaches to building and operating business-critical software systems. However, the pricing of an aggregated logging solution can affect the adoption of modern logging techniques, as well as organizational capabilities and cross-team collaboration. We need to choose our log aggregation tools carefully to make sure that we don't introduce unintended barriers or silos driven by unhelpful pricing.

Enabling DevOps

Avoid 'Singleton' Log Aggregation That Exists Only in Production
Organizations considering commercial tooling for log management are often sold on the idea that the single most important location for log collection is the Production (Live) environment. While first-class logging facilities for live environments are hugely valuable, if we fail to provide log message collection in upstream environments (QA, Dev, Test, Pre-Prod, etc.) we miss out on a significant opportunity to discover how the application and infrastructure behave before reaching Production.

One of the most common causes of these 'singleton' tools - tools that exist only in Production - is that the tool is licensed per server. We see that per-server licenses tend to drive down the number of servers on which the tool (or agent) is installed; in this scenario, licenses are purchased only for Production machines, leaving upstream environments with no log aggregation. Contrast this with other approaches to tool licensing, such as charging per GB of data transferred or per number of messages/queries per month, which provide a much better scaling model for the tool and encourage teams to use it in Dev and Test environments as well as in Production.
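To make the contrast concrete, here is a minimal back-of-the-envelope sketch of the two pricing models. All server counts, log volumes, and prices are illustrative assumptions, not any real vendor's rates; the point is only that per-server pricing scales with how many machines you cover, while per-GB pricing scales with the data that quieter upstream environments actually ship.

```python
# Hypothetical comparison of per-server vs per-GB licensing across environments.
# Every number below is an assumption for illustration only.

SERVERS = {"prod": 10, "qa": 6, "dev": 8}              # servers per environment
GB_PER_SERVER = {"prod": 5.0, "qa": 1.0, "dev": 0.5}   # monthly log volume per server

PER_SERVER_LICENSE = 50.0   # $/server/month (assumed)
PER_GB_RATE = 2.0           # $/GB ingested (assumed)

def per_server_cost(envs):
    """License every server in every environment we want covered."""
    return sum(SERVERS[e] for e in envs) * PER_SERVER_LICENSE

def per_gb_cost(envs):
    """Pay only for the data each environment actually ships."""
    return sum(SERVERS[e] * GB_PER_SERVER[e] * PER_GB_RATE for e in envs)

# Covering only Production vs covering everything:
print(per_server_cost(["prod"]))                # 500.0
print(per_server_cost(["prod", "qa", "dev"]))   # 1200.0 - cost tracks server count
print(per_gb_cost(["prod", "qa", "dev"]))       # 120.0  - quiet Dev/QA servers cost little
```

Under these (assumed) numbers, extending a per-server license beyond Production more than doubles the bill, which is exactly the incentive that produces 'singleton' deployments; a volume-based model makes covering every environment comparatively cheap.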

If our log aggregation tool is available only in Production, developers tend to view logging as 'just an Ops thing' and have little incentive or reason to care about improving log messages or adding useful metrics. However, if developers and operations people all use the same tooling for searching logs across all environments (Dev, Test, Prod) then they can begin to collaborate on a common view of the system behavior, gaining valuable insights into improving operability.
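One simple practice that supports this shared view is tagging every log event with the environment it came from, so a single aggregated search can be filtered by environment. Here is a minimal sketch using only Python's standard library; the JSON field names and the `APP_ENV` variable are assumptions for illustration, not any particular vendor's schema.

```python
# A minimal sketch of environment-tagged structured logging (stdlib only).
# Field names ("env", "level", "logger", "message") and the APP_ENV variable
# are illustrative assumptions, not a specific log-aggregation vendor's schema.
import json
import logging
import os

class EnvJsonFormatter(logging.Formatter):
    """Emit each record as one JSON line carrying an explicit environment field."""
    def __init__(self, env):
        super().__init__()
        self.env = env

    def format(self, record):
        return json.dumps({
            "env": self.env,                 # e.g. "dev", "test", "prod"
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

env = os.environ.get("APP_ENV", "dev")
handler = logging.StreamHandler()
handler.setFormatter(EnvJsonFormatter(env))
logger = logging.getLogger("checkout")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order placed")   # emits a JSON line tagged with the current env
```

Because the same formatter runs in every environment, the same query in the aggregation tool works everywhere, and a simple `env` filter separates Dev noise from Production incidents.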

Selling the Benefits of Log Aggregation
Working with a variety of clients over the past 18 months, we have found that the benefits of log aggregation are often not well understood by technical teams.

Time and time again we speak to developers and operations people who are astonished at how easy it is to search for and find information in logs once those logs are in a central log aggregation tool.

They never want to go back to manual log 'scraping' again.

It's therefore crucial when evaluating log aggregation tools to find products that demonstrate the fundamentals of log aggregation as much as they show off a specific vendor's product. The Free tier or 30-day demo version of the tool needs to have sufficient capability so that we can demonstrate end-to-end value: event streams from a whole set of servers, server-level metrics, integration with time-series graphing tools and dashboarding tools, and pre-baked queries (we particularly like Logentries for its features here). Teams often need to convince budget holders to pay for what initially seems like a paid version of existing free tools but - if demonstrated well and adequately evangelized - rapidly becomes an obvious new core capability.

When we make sure that the subsequent rollout of log aggregation spans both development and IT operations, then we lay the foundations for a culture of shared responsibility where diagnostics and time-to-recovery are highly valued, a crucial feature of modern web systems.

