Make Sense of Errors and Logging By @Stackify | @DevOpsSummit [#DevOps]

While errors and logs are often instrumental to diagnosing application issues, getting the most out of them isn't easy

Three Ways to Make Sense of Errors & Logging
By Craig Ferril

Errors and log files are two of the most important tools a developer has for finding the source of a problem. If you're like most developers, your approach to capturing and utilizing errors and logs is fairly straightforward: you probably send log output to a file or a log aggregation product, and you may notify on the occurrence of errors, either by sending emails directly from your code or via an error monitoring product.

What's lacking from these approaches is something a bit more holistic, comprehensive, and contextual. The trouble comes in two forms:

  • There's often far more noise than signal if you're relying solely on logs to track, isolate, and make sense of your errors, especially when an error is being thrown over and over again, or when you're dealing with log files across numerous servers.
  • If you're focused primarily on errors, either emailing on error occurrences or using an error monitoring program, that approach removes the relevant logs from the picture altogether, leaving you without the context you need to determine root cause.

In this article, I'll cover three ways you can make sense of your errors and logs together:

  1. Aggregation - If you're developing an application that runs on a single server, finding all of your logs isn't an issue for you. But it's far more likely that your applications are hosted on multiple servers for availability, scalability, and redundancy, making it more difficult to easily (and centrally) access errors and logging data. Tools exist to aggregate logs in various standard formats (assuming you have access), which is a step in the right direction, given the potential for numerous separate log files, as well as log rotation and retention issues. The right answer is to implement a solution that aggregates logs and errors with development in mind. That way, you can be sure you're collecting every piece of information necessary and have it presented in a way that's geared toward developers.
  2. Error De-duplication - While aggregation ensures that all of your logs and errors end up in a central location, that can lead to a lot of noise that hides the truly valuable insights buried in your logs. Taking a step beyond simply aggregating log statements, toward deriving fast insights from your logs and errors, means implementing a strategy that de-duplicates errors and provides additional information anchored to each incident of the error, without forcing you to wade through an endless stream of error statements in a log. Treating individual errors as first-class items of interest, rather than just yet another line in the log file, gives you top-level visibility, enables you to configure effective notification and resolution strategies focused on a specific exception, and, with the right platform, gives you an anchor point for seeing only the log statements related to that error (rather than sifting through all log statements to find the ones that matter). This all adds up to a strategy that filters out the noise and focuses your efforts on just what you care about.
  3. Analysis - Even if you can aggregate your data and associate the error and logging data together, you're still left with a very long chronological list of things your application did (and didn't do - hence the exceptions). Several needs remain before you can truly make sense of this data set: seeing the frequency of errors, tying exceptions on one server to methods and processes on another, searching quickly through a massive data set, and even just jumping to a particular point in time. All of these, and more, need to be part of the solution to properly make sense of the data you have.
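The aggregation idea in point 1 can be sketched in a few lines. This is a minimal illustration, not a real product: the shared list stands in for a central log store, and the server names and `CentralAggregator` class are hypothetical.

```python
import logging

# Sketch of centralized aggregation: a logging handler that ships every
# record from many "servers" into one shared store, tagged with its source.
class CentralAggregator(logging.Handler):
    def __init__(self, store, hostname):
        super().__init__()
        self.store = store          # a list here; a real system would use a
                                    # log service or database
        self.hostname = hostname

    def emit(self, record):
        self.store.append({
            "host": self.hostname,
            "level": record.levelname,
            "message": record.getMessage(),
        })

central_store = []
for host in ("web-1", "web-2"):     # two hypothetical app servers
    logger = logging.getLogger(host)
    logger.addHandler(CentralAggregator(central_store, host))
    logger.warning("disk usage high")

# Records from both hosts are now queryable in one place.
print(len(central_store))  # 2
```

The key design point is that every record carries its origin, so a central view never loses track of which server produced which line.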
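The de-duplication strategy in point 2 usually rests on a "fingerprint": exceptions with the same type and raising location collapse into one first-class error with a count. A rough sketch, with the `fingerprint` and `record_error` helpers being hypothetical names:

```python
import traceback
from collections import defaultdict

# De-duplicate errors by (exception type, file, function) so 100 identical
# exceptions become one entry with a count, not 100 log lines to wade through.
def fingerprint(exc):
    frame = traceback.extract_tb(exc.__traceback__)[-1]
    return (type(exc).__name__, frame.filename, frame.name)

errors = defaultdict(lambda: {"count": 0, "sample": None})

def record_error(exc):
    entry = errors[fingerprint(exc)]
    entry["count"] += 1
    if entry["sample"] is None:     # keep one sample message per error
        entry["sample"] = str(exc)

for _ in range(100):
    try:
        1 / 0
    except ZeroDivisionError as e:
        record_error(e)

# 100 occurrences, but only one distinct error to triage.
```

With an anchor like this, notifications and related log statements can hang off the single de-duplicated error rather than every occurrence.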
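For the analysis step in point 3, once records are aggregated with timestamps, a question like "how often is this erroring?" reduces to a group-by. A toy sketch with made-up records:

```python
from collections import Counter
from datetime import datetime

# Sample aggregated records; timestamps and messages are illustrative only.
records = [
    {"ts": "2015-03-02T10:01:12", "level": "ERROR", "msg": "timeout calling billing"},
    {"ts": "2015-03-02T10:01:45", "level": "ERROR", "msg": "timeout calling billing"},
    {"ts": "2015-03-02T10:02:03", "level": "INFO",  "msg": "retry succeeded"},
]

def error_frequency(records):
    """Count ERROR records per minute across the aggregated stream."""
    minutes = Counter()
    for r in records:
        if r["level"] == "ERROR":
            ts = datetime.fromisoformat(r["ts"])
            minutes[ts.strftime("%Y-%m-%d %H:%M")] += 1
    return minutes

freq = error_frequency(records)  # e.g. two errors in the 10:01 minute
```

The same structure supports jumping to a point in time or filtering by server; a real platform would index the store rather than scan a list.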

While errors and logs are often instrumental to diagnosing application issues, getting the most out of them isn't easy. If you're using a narrowly focused tool or rolling your own solution, you're likely either struggling to get to the data you need when you need it, or trying to find a needle in a haystack (or, perhaps more apt, a needle in a needle stack). Creating effective error notifications, error de-duplication, log aggregation and analysis, and seamless correlation between errors and just the relevant log statements is an especially difficult challenge. Getting it right requires tremendous custom development, a mix of custom development on top of a product that offers a partial solution, or adopting multiple solutions that each solve only part of the problem. That is, of course, unless you use Stackify Smart Error and Log Management!

To get a more in-depth look at evolving your application troubleshooting, read the whitepaper 3 Steps to Evolve your Application Troubleshooting.

Photo Credit: Windell Oskay

More Stories By Stackify Blog

Stackify offers the only developer-friendly solution that fully integrates error and log management with application performance monitoring and management, allowing you to easily isolate issues, identify what needs to be fixed faster, and focus your efforts - support less, code more. Stackify provides software developers, operations, and support managers with an innovative cloud-based solution that gives them DevOps insight and allows them to monitor, detect, and resolve application issues before they affect the business, ensuring a better end-user experience. Start your free trial now at stackify.com.
