
Application Failures in Production | @DevOpsSummit [DevOps]

The wealth of out-of-the-box insights you could obtain from a single urgent, albeit unspecific log message

How to Approach Application Failures in Production

In my recent article, "Software Quality Metrics for your Continuous Delivery Pipeline - Part III - Logging," I wrote about the good and the not-so-good parts of logging and concluded that logging usually fails to deliver what it is so often mistakenly used for: a mechanism for analyzing application failures in production. In response to the heated debates that article sparked, I want to demonstrate the wealth of out-of-the-box insights you can obtain from a single urgent, albeit unspecific, log message if only you are equipped with the magic ingredient, full transaction context:

Examples of insights you could obtain from full transaction context on a single log message

Bear with me while I explain what this actually means and how it helps you get almost immediate answers to the most urgent questions when your users are hit by an application failure:

  • "How many users are affected and who are they?"
  • "Which tiers are affected by which errors and what is the root cause?"

Operator: I'm here because you broke something.

When All You Have Is a Lousy Log Message
Does this story sound familiar? It's a Friday afternoon and the development team has, belatedly, just handed you the release artifacts, which need to go live by Monday morning. After spending the night and another day in operations getting the release into production on time, you notice the following Monday that all you have achieved, in the end, is some lousy log message:

08:55:26 SEVERE - LoginException occurred when processing Login transaction

While this scenario hopefully does not reflect a common case for you, it still highlights an important aspect of life in development and operations: working as an operator involves monitoring the production environment and helping to troubleshoot application failures mainly with the help of log messages - things that developers have baked into their code. While certainly not all log messages need to be as poor as this one, getting to the bottom of a production failure is often a tedious endeavor (see this comment on reddit by RecklessKelly, who sometimes needs weeks to reach his "Eureka moment") - if it is possible at all.
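For illustration, here is a minimal sketch (not taken from the article; class and method names are hypothetical) of the kind of logging code that produces exactly this sort of message: the exception is caught and logged at SEVERE level, but the surrounding context - the username, the input parameters, even the stack trace - is discarded, which is why the operator ends up with a single unspecific line.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Hypothetical login service; names are illustrative, not from the article.
public class LoginService {

    private static final Logger LOGGER = Logger.getLogger(LoginService.class.getName());

    public boolean login(String username, String password) {
        try {
            return authenticate(username, password);
        } catch (LoginException e) {
            // All context (username, input parameters, stack trace) is dropped here --
            // the operator only ever sees this one-line SEVERE message.
            LOGGER.log(Level.SEVERE, "LoginException occurred when processing Login transaction");
            return false;
        }
    }

    private boolean authenticate(String username, String password) throws LoginException {
        // Authentication against a backend would happen here.
        throw new LoginException("Invalid credentials");
    }
}

// Minimal checked exception so the sketch is self-contained.
class LoginException extends Exception {
    LoginException(String message) {
        super(message);
    }
}
```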

Why There Is No Such Thing as 100% Error-Free Code
Production failures can become a major pain for your business, with long-term effects: not only will they make your visitors buy elsewhere, but depending on their level of frustration, customers may choose to stay with your competitors instead of giving you another chance.

As we all know, we simply cannot get rid of application failures in production entirely. Agile methodologies such as Extreme Programming or Scrum aim to build quality into our processes; still, there is no such thing as a 100% error-free application. "We need to write more tests!" you may argue, and I would agree: disciplines such as TDD and ATDD should be an integral part of your software development process since, applied correctly, they help you produce better code and fewer bugs. Still, it is simply impossible to test every corner of your application for all possible combinations of input parameters and application state. Essentially, we can run only a limited subset of all possible test scenarios. The common goal of developers and test automation engineers must therefore be to implement a testing strategy that allows them to deliver code of sufficient quality. Consequently, there is always a chance that something can go wrong, and, as a serious business, you will want to be prepared for the unpredictable and have as much control over it as possible:

Why you cannot get rid of application failures in production: remaining failure probability
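To make the limited-coverage argument concrete, here is a small sketch (JUnit 5; the validator class and the test data are hypothetical, not from the article) of a parameterized test: it exercises a handful of representative input combinations, while the vastly larger space of possible inputs and application states remains untested - which is exactly the remaining failure probability pictured above.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical input-validation check; names are illustrative only.
class LoginRequestValidator {
    static boolean isWellFormed(String username, String password) {
        return username != null && !username.isEmpty()
                && password != null && password.length() >= 8;
    }
}

class LoginRequestValidatorTest {

    // Four representative cases out of an effectively unbounded input space:
    // every untested combination of input and application state remains a
    // residual failure risk in production.
    @ParameterizedTest
    @CsvSource({
            "alice, s3cret-pass, true",
            "alice, short,       false",
            "'',    s3cret-pass, false",
            "bob,   '',          false"
    })
    void wellFormedness(String username, String password, boolean expected) {
        assertEquals(expected, LoginRequestValidator.isWellFormed(username, password));
    }
}
```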

Without further ado, let's examine some precious out-of-the-box insights you could obtain if you are equipped with full transaction context and are able to capture all transactions.

Why is this important? Because it enables you to see the contributions of input parameters, processes, infrastructure, and users whenever a failure occurs, to solve problems faster, and to use the captured information, such as unexpected input parameters, to further improve your testing strategy.

Initial Situation: Aggregated Log Messages
Instead of crawling through a set of possibly distributed log files to count particular log messages, we first of all want this done for us automatically, as the messages occur. This gives a good overview of the respective message frequencies and facilitates prioritization:

Aggregated log events: severity, logger name, message and count

What we see here (an analysis view based on our PurePath technology) is that there have been 104 occurrences of the same log message in the application. We can also see other captured event data, such as the severity level and the name of the logger instance (usually the name of the class that created the logger).
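To make the idea tangible, here is a naive, hand-rolled stand-in for such an aggregation (plain Java; the "HH:mm:ss LEVEL - message" line format and file-based input are assumptions of this sketch, not how the analysis view itself works): it groups identical messages across a set of possibly distributed log files and prints the most frequent ones first.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Stream;

// Naive stand-in for an aggregated-log-events view: count identical messages
// across a set of (possibly distributed) log files passed as arguments.
public class LogAggregator {

    public static void main(String[] args) throws IOException {
        Map<String, Long> counts = new LinkedHashMap<>();

        for (String file : args) {
            try (Stream<String> lines = Files.lines(Path.of(file))) {
                lines.map(LogAggregator::severityAndMessage)
                     .forEach(key -> counts.merge(key, 1L, Long::sum));
            }
        }

        // Print the most frequent messages first to ease prioritization.
        counts.entrySet().stream()
              .sorted(Map.Entry.<String, Long>comparingByValue().reversed())
              .forEach(e -> System.out.printf("%6d  %s%n", e.getValue(), e.getKey()));
    }

    // Strip the leading timestamp so identical messages aggregate under one key.
    private static String severityAndMessage(String line) {
        return line.replaceFirst("^\\d{2}:\\d{2}:\\d{2}\\s+", "");
    }
}
```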

Question #1: How many users are affected and who are they?

Failed Business Transactions: "Logins" and "Logins by Username"

Having the full transactional context, and not just the log message, allows us to figure out which critical Business Transactions of our application are impacted. From the dashboard above we can observe that "Logins" and "Logins by Username" have failed: 61 users attempted the 104 failed logins, and we can identify who these users are by their usernames.
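The sketch below (a toy model with made-up sample data, not the product's implementation - the article's point is that this context is captured automatically) illustrates what "full transaction context" means for a single failed log event: because each failure record also carries the Business Transaction name and the user who triggered it, answering "how many users are affected and who are they?" becomes a simple query instead of a log-file hunt.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Toy model of a captured failure event with its transaction context attached.
record FailedTransaction(String businessTransaction, String username, String logMessage) {}

public class ImpactAnalysis {

    public static void main(String[] args) {
        // Made-up sample data for illustration only.
        List<FailedTransaction> failures = List.of(
                new FailedTransaction("Logins", "alice",
                        "LoginException occurred when processing Login transaction"),
                new FailedTransaction("Logins by Username", "bob",
                        "LoginException occurred when processing Login transaction"),
                new FailedTransaction("Logins", "alice",
                        "LoginException occurred when processing Login transaction"));

        // "How many users are affected and who are they?" becomes a simple query.
        Map<String, List<String>> usersPerTransaction = failures.stream()
                .collect(Collectors.groupingBy(
                        FailedTransaction::businessTransaction,
                        Collectors.mapping(FailedTransaction::username, Collectors.toList())));

        usersPerTransaction.forEach((transaction, users) ->
                System.out.printf("%s: %d failed attempts by users %s%n",
                        transaction, users.size(),
                        users.stream().distinct().collect(Collectors.toList())));
    }
}
```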

Questions #2 and #3, along with further insights, are covered in the full article.

More Stories By Martin Etmajer

Martin Etmajer has 10+ years of experience as a developer and software architect, as well as in maintaining highly available and performant cluster environments. In his current role, Martin works as a Technology Strategist for Dynatrace with a focus on Continuous Delivery and DevOps. He speaks at technology conferences and meetups and publishes articles on this blog. Reach him at @metmajer

