Log Aggregation Across Dev and Ops | @DevOpsSummit #DevOps #Docker #Containers #Microservices

Modern tools for log aggregation can be hugely enabling for DevOps approaches

Using Log Aggregation Across Dev & Ops: The Pricing Advantage
By Rob Thatcher, Co-founder and Principal Consultant at Skelton Thatcher Consulting.

Summary: the pricing of tools or licenses for log aggregation can have a significant effect on organizational culture and the collaboration between Dev and Ops teams.

Modern tools for log aggregation (of which Logentries is one example) can be hugely enabling for DevOps approaches to building and operating business-critical software systems. However, the pricing of an aggregated logging solution can affect the adoption of modern logging techniques, as well as organizational capabilities and cross-team collaboration. We need to choose our log aggregation tools carefully to make sure that we don't introduce unintended barriers or silos driven by unhelpful pricing.

Enabling DevOps

Avoid ‘Singleton' Log Aggregation That Exists Only in Production
Organizations considering commercial tooling for log management are often sold on the idea that the single most important location for log collection is the Production (Live) environment. Whilst first-class logging facilities for live environments are hugely valuable, if we fail to provide log message collection in upstream environments (QA, Dev, Test, Pre-Prod, etc.) we miss out on a significant opportunity to discover how the application and infrastructure behave before they reach Production.

One of the most common causes of these ‘singleton' tools - tools that exist only in Production - is that the tool is licensed per server. We see that per-server licenses tend to drive down the number of servers on which the tool (or agent) is installed; in this scenario, licenses are purchased only for Production machines, leaving upstream environments with no log aggregation. Contrast this with other approaches to tool licensing, such as charging per GB of data transferred, or number of messages/queries per month, which provide a much better scaling model for the tool, and encourage teams to use the tool in Dev or Test environments as well as in Production.
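The scaling difference between the two licensing models can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical prices and volumes; the point is only the shape of the curves: upstream environments add many hosts but comparatively little log volume, so per-GB pricing barely moves while per-server pricing multiplies.

```python
# Illustrative comparison of two licensing models (all figures hypothetical).
# Per-server pricing charges for every host running the agent; per-GB pricing
# charges for data volume, regardless of how many environments send logs.

PER_SERVER_MONTHLY = 50.0   # hypothetical $/server/month
PER_GB_MONTHLY = 2.0        # hypothetical $/GB/month

def per_server_cost(servers: int) -> float:
    """Cost grows with every host we install the agent on."""
    return servers * PER_SERVER_MONTHLY

def per_gb_cost(gb_ingested: float) -> float:
    """Cost grows with data volume, not host count."""
    return gb_ingested * PER_GB_MONTHLY

# Production only: 10 servers producing 40 GB/month.
prod_only = per_server_cost(10)        # 500.0
# Adding Dev/Test/QA: 30 more (mostly quiet) hosts, +10 GB/month.
all_envs_per_server = per_server_cost(40)   # 2000.0 - cost quadruples
all_envs_per_gb = per_gb_cost(50)           # 100.0 - modest increase

print(prod_only, all_envs_per_server, all_envs_per_gb)
```

Under the hypothetical per-server model, covering upstream environments quadruples the bill, which is exactly the pressure that produces Production-only 'singleton' deployments; under per-GB pricing the marginal cost of quiet Dev and Test hosts is small.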

If our log aggregation tool is available only in Production, developers tend to view logging as ‘just an Ops thing' and have little incentive or reason to care about improving log messages or adding useful metrics. However, if developers and operations people all use the same tooling for searching logs across all environments (Dev, Test, Prod) then they can begin to collaborate on a common view of the system behavior, gaining valuable insights into improving operability.
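One practical way to make that shared view possible is to tag every log message with the environment it came from, so a single aggregator can hold Dev, Test, and Prod streams side by side. A minimal sketch using only the Python standard library follows; the JSON field names (`env`, `level`, `logger`, `message`) are our own illustrative convention, not any specific vendor's schema.

```python
import json
import logging

class EnvJsonFormatter(logging.Formatter):
    """Formats each record as a JSON object tagged with its environment,
    so aggregated logs can be filtered by an "env" field."""

    def __init__(self, env: str):
        super().__init__()
        self.env = env

    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "env": self.env,
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

def make_logger(env: str) -> logging.Logger:
    """Builds a logger whose output carries the environment tag."""
    logger = logging.getLogger(f"app.{env}")
    handler = logging.StreamHandler()
    handler.setFormatter(EnvJsonFormatter(env))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# The same code path runs everywhere; only the tag differs.
make_logger("dev").info("checkout latency high")
make_logger("prod").info("checkout latency high")
```

Because the logging code is identical in every environment, developers see in Dev exactly the messages Ops will search in Prod, which is what makes collaborating on message quality worthwhile.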

Selling the Benefits of Log Aggregation
Working with a variety of clients over the past 18 months, we have found that the benefits of log aggregation are often not well understood by technical teams.

Time and time again we speak to developers and operations people who are astonished at how easy it is to search for and find information in logs once those logs are in a central log aggregation tool.

They never want to go back to manual log ‘scraping' again.

It's therefore crucial when evaluating log aggregation tools to find products that demonstrate the fundamentals of log aggregation as much as they show off a specific vendor's features. The free tier or 30-day demo version of the tool needs sufficient capability for us to demonstrate end-to-end value: event streams from a whole set of servers, server-level metrics, integration with time-series graphing and dashboarding tools, and pre-baked queries (we particularly like Logentries for its features here). Teams often need to convince budget holders to pay for what initially seems like a repackaged version of existing free tools but - if demonstrated well and adequately evangelized - rapidly becomes an obvious new core capability.
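The kind of 'pre-baked query' a demo needs to show can be very simple: turn a raw event stream into a time-series metric. The sketch below counts error events per minute across lines from several servers; the line format is an assumption made up for illustration, not a real product's schema.

```python
from collections import Counter

# Hypothetical aggregated log lines: "timestamp host LEVEL message".
lines = [
    "2019-06-24T10:01:03Z web-1 ERROR payment timeout",
    "2019-06-24T10:01:40Z web-2 INFO request ok",
    "2019-06-24T10:02:11Z web-1 ERROR payment timeout",
]

# A minimal "errors per minute" query - the sort of result a demo tier
# should let us chart in a time-series or dashboarding tool.
errors_per_minute = Counter()
for line in lines:
    timestamp, host, level, *_ = line.split()
    if level == "ERROR":
        errors_per_minute[timestamp[:16]] += 1  # truncate to the minute

print(dict(errors_per_minute))
```

Even this toy query makes the pitch to budget holders concrete: the same search that finds one error across all servers also produces a metric worth graphing.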

When we make sure that the subsequent rollout of log aggregation spans both development and IT operations, then we lay the foundations for a culture of shared responsibility where diagnostics and time-to-recovery are highly valued, a crucial feature of modern web systems.

