
Make Sense of Errors and Logging By @Stackify | @DevOpsSummit [#DevOps]

While errors and logs are often instrumental to diagnosing application issues, getting the most out of them isn't easy

Three Ways to Make Sense of Errors & Logging
By Craig Ferril

Errors and log files are two of the most important tools a developer has for finding the source of a problem. If you're like most developers, your approach to capturing and utilizing errors and logs is fairly straightforward: you probably send log output to a file or a log aggregation product, and you may notify on the occurrence of errors, either by sending emails directly from your code or via an error monitoring product.
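As a concrete illustration of that "straightforward" approach, here is a minimal sketch using Python's standard logging module: all output goes to a rotating file, and errors additionally trigger an email. The hostnames, addresses, and logger name are placeholder assumptions, not anything from a specific product.

```python
import logging
import logging.handlers

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# Everything at INFO and above lands in a local, rotated log file.
file_handler = logging.handlers.RotatingFileHandler(
    "myapp.log", maxBytes=10_000_000, backupCount=5
)
logger.addHandler(file_handler)

# Errors additionally go out as email notifications.
smtp_handler = logging.handlers.SMTPHandler(
    mailhost="smtp.example.com",
    fromaddr="alerts@example.com",
    toaddrs=["oncall@example.com"],
    subject="Application error",
)
smtp_handler.setLevel(logging.ERROR)
logger.addHandler(smtp_handler)
```

This works, but it exhibits exactly the two problems described next: the file fills with noise, and the error emails arrive stripped of the log context around them.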

What's lacking from these approaches is something a bit more holistic, comprehensive, and contextual. The trouble comes in two forms:

  • There's often far more noise than signal if you rely solely on logs to track, isolate, and make sense of your errors, especially when an error is thrown over and over again, or when you're dealing with log files spread across numerous servers.
  • If you focus primarily on errors, either emailing on each occurrence or using an error monitoring product, you remove the relevant logs from the picture altogether, leaving you without the context you need to determine root cause.

In this article, I'll cover three ways you can make sense of your errors and logs together:

  1. Aggregation - If you're developing an application that runs on a single server, finding all of your logs isn't an issue. But it's far more likely that your applications are hosted on multiple servers for availability, scalability, and redundancy, making it harder to access errors and logging data centrally. Tools exist to aggregate logs in various standard formats (assuming you have access), which is a step in the right direction, given the potential for numerous separate log files, as well as log file rotation and retention issues. The better answer is to implement a solution that aggregates logs and errors with development in mind. That way, you can be sure you're collecting every piece of information necessary and have it presented in a way that's geared toward developers.
  2. Error De-duplication - While aggregation ensures that all of your logs and errors end up in a central location, that can lead to a lot of noise that hides the truly valuable insights in your logs. Taking a step beyond simply aggregating log statements, toward deriving fast insights from your logs and errors, means implementing a strategy that de-duplicates errors and provides additional information anchored to each incident of the error, without forcing you to wade through an endless stream of error statements in a log. Treating individual errors as first-class items of interest, rather than just yet another line in the log file, gives you top-level visibility, enables you to configure effective notification and resolution strategies focused on a specific exception, and, with the right platform, gives you an anchor point for seeing only the log statements related to that error (rather than sifting through all log statements to find the ones that matter). This all adds up to a strategy that filters out the noise and focuses your efforts on just what you care about.
  3. Analysis - Even if you can aggregate your data and associate the error and logging data together, you're still left with a very long chronological list of everything your application did (and didn't do - thus, the exceptions). Several needs remain before we can truly say we've made sense of this data set: seeing the frequency of errors, tying exceptions on one server to methods and processes on another, being able to search quickly through this massive data set, and even just being able to jump quickly to a particular point in time. All of these, and more, need to be part of the solution to properly make sense of the data you have.
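The de-duplication idea in point 2 can be sketched in a few lines. The grouping rule below (fingerprint by exception type plus the file and line that raised it) and all names are illustrative assumptions, not any particular product's implementation; the point is that repeated occurrences collapse into one group, each occurrence carrying the log lines that led up to it.

```python
import collections
import traceback
from collections import deque

recent_logs = deque(maxlen=50)                 # rolling buffer of log statements
error_groups = collections.defaultdict(list)   # fingerprint -> list of occurrences

def log(message):
    recent_logs.append(message)

def record_error(exc):
    # Fingerprint: exception type + the frame that raised it.
    tb = traceback.extract_tb(exc.__traceback__)
    origin = tb[-1] if tb else None
    fingerprint = (
        type(exc).__name__,
        origin.filename if origin else "?",
        origin.lineno if origin else 0,
    )
    error_groups[fingerprint].append({
        "message": str(exc),
        # Only the log statements leading up to this error, not the whole file.
        "context": list(recent_logs),
    })

# The same error raised repeatedly collapses into one group of three occurrences.
for attempt in range(3):
    log(f"processing attempt {attempt}")
    try:
        raise ValueError("bad input")
    except ValueError as e:
        record_error(e)

print(len(error_groups))  # → 1 distinct error group, not 3 separate log lines
```

A real platform would add persistence, notification thresholds per group, and cross-server correlation, but the core inversion is the same: the error becomes the first-class item, and the logs hang off it.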
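The frequency and "jump to a point in time" needs in point 3 amount to querying an aggregated event stream by time bucket. A minimal sketch, with made-up sample events and a per-minute bucketing scheme chosen purely for illustration:

```python
from collections import Counter
from datetime import datetime

# Aggregated stream of (timestamp, server, error_type) events
# collected from multiple servers (sample data).
events = [
    ("2015-03-01T10:00:05", "web-1", "TimeoutError"),
    ("2015-03-01T10:00:40", "web-2", "TimeoutError"),
    ("2015-03-01T10:01:12", "web-1", "ValueError"),
    ("2015-03-01T10:01:55", "web-2", "TimeoutError"),
]

def minute_bucket(ts):
    return datetime.fromisoformat(ts).strftime("%Y-%m-%d %H:%M")

# Error frequency: occurrences per minute across all servers.
per_minute = Counter(minute_bucket(ts) for ts, _, _ in events)

# "Jump to a point in time": filter the aggregated stream to one bucket.
at_10_01 = [e for e in events if minute_bucket(e[0]) == "2015-03-01 10:01"]
```

The same filtering generalizes to tying events together across servers (group by error type or request ID instead of by minute), which is why aggregation has to come first.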

While errors and logs are often instrumental to diagnosing application issues, getting the most out of them isn't easy. If you're using a narrowly focused tool or rolling your own solution, it's likely you're either struggling to get to the data you need when you need it, or you're trying to find a needle in a haystack (or, perhaps more apt, a needle in a needle stack). Creating effective error notifications, error de-duplication, log aggregation and analysis, and seamless correlation between errors and just the relevant log statements presents an especially difficult challenge. Getting it right requires either tremendous custom development, custom development on top of a product that offers a partial solution, or adopting multiple solutions that each solve only part of the problem. That is, of course, unless you use Stackify Smart Error and Log Management!

To get a more in-depth look at evolving your application troubleshooting, read the whitepaper 3 Steps to Evolve your Application Troubleshooting.

Photo Credit: Windell Oskay

More Stories By Stackify Blog

Stackify offers the only developer-friendly solution that fully integrates error and log management with application performance monitoring and management, allowing you to easily isolate issues, identify what needs to be fixed more quickly, and focus your efforts – Support less, Code more. Stackify provides software developers, operations, and support managers with an innovative cloud-based solution that gives them DevOps insight and allows them to monitor, detect, and resolve application issues before they affect the business, ensuring a better end-user experience. Start your free trial now at stackify.com.
