Software Quality Metrics for Your Continuous Delivery Pipeline | Part 3

It is safe to say that insightful logging and good performance are opposing goals

Let me ask you a question: would you say that you have implemented logging correctly for your application? Correct in the sense that it provides all the insights you need to keep your business running once your users are hit by errors? And in a way that does not adversely impact your application's performance? Honestly, I bet you have not. Today I will explain why you should turn off logging completely in production because of its limitations:

  • Relies on Developers
  • Lacks Context
  • Impacts Performance

Intrigued? Bear with me and I will show you how you can still establish and maintain a healthy and useful logging strategy for your deployment pipeline, from development to production, guided by metrics.

What Logging Can Do for You
Developers, myself included, often write log messages because we are lazy. Why should I set a breakpoint and fire up a debugger when it is so much more convenient to dump something to my console via a simple println()? This simple yet effective mechanism also works on headless machines where no IDE is installed, such as staging or production environments:

System.out.println("Been here, done that.");

Careful coders would use a logger to prevent random debug messages from appearing in production logs and additionally use guard statements to prevent unnecessary parameter construction:

if (logger.isDebugEnabled()) {
    logger.debug("Entry number: " + i + " is " + String.valueOf(entry[i]));
}
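
For what it's worth, parameterized logging achieves the same effect without an explicit guard. Here is a minimal sketch assuming the SLF4J API (the snippets in this article use a generic logger, so take the exact API as an assumption): the placeholders are only substituted when the DEBUG level is actually enabled.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class EntryLogger {

    private static final Logger logger = LoggerFactory.getLogger(EntryLogger.class);

    void logEntry(int i, String[] entry) {
        // No string concatenation happens unless DEBUG is enabled,
        // so there is no need for an isDebugEnabled() guard here.
        logger.debug("Entry number: {} is {}", i, entry[i]);
    }
}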

Anyway, the point about logging is that traces of log messages allow developers to better understand what their program is doing during execution. Does my program take this branch or that branch? Which statements were executed before that exception was thrown? I have done this at least a million times, and most likely so have you:

if (condition) {
    logger.debug("7: yeah!");
} else {
    logger.debug("8: DAMN!!!");
}

Test Automation Engineers, usually developers by trade, likewise use logging to better understand how the code under test behaves in their test scenarios:

class AgentSpec extends spock.lang.Specification {

    def "Agent.compute()"() {
        given:
        def agent = AgentPool.getAgent()

        when:
        def result = agent.compute(TestFixtures.getData())

        then:
        logger.debug("result: " + result)
        result == expected
    }
}

Logging is, undoubtedly, a helpful tool during development and I would argue that developers should use it as freely as possible if it helps them to understand and troubleshoot their code.

In production, application logging is useful for tracking certain events, such as the occurrence of a particular exception, but it usually fails to deliver what it is so often mistakenly used for: analyzing application failures in production. Why?

Because approaches to achieving this goal with logging are inherently brittle: their usefulness depends heavily on developers, messages lack context, and, if not designed carefully, logging may severely slow down your application.

Secretly, what you are really hoping to get from your application logs, in one form or another, is something like this:

Figure: A logging strategy that delivers out of the box using dynaTrace: user context, all relevant data in place, zero configuration.

The Limits of Logging
Logging Relies on Developers
Let's face it: logging is, inherently, a developer-centric mechanism. The usefulness of your application logs stands and falls with your developers. A best practice for logging in production says: "don't log too much" (see Optimal Logging @ Google testing blog). This sounds sensible, but what does it actually mean? If we recall the basic motivation behind logging in production, we could equally rephrase it as "log just enough information about a failure to let you take adequate action". So, what would it take for your developers to provide such actionable insights? They would need to correctly anticipate where in the code errors will occur in production. They would also need to collect the relevant bits of information along an execution path that carry these insights and, last but not least, present them in a meaningful way so that others can understand them, too. Developers are, no doubt, a critical factor in the usefulness of your application logs.

Logging Lacks Context
Logging during development is so helpful because developers and testers usually examine smaller, co-located units of code that are executed in a single thread. It is fairly easy to maintain an overview under such simulated conditions, such as a test scenario:

13:49:59 INFO com.company.product.users.UserManager - Registered user 'foo'.
13:49:59 INFO com.company.product.users.UserManager - User 'foo' has logged in.
13:49:59 INFO com.company.product.users.UserManager - User 'foo' has logged out.

But how can you reliably identify an entire failing user transaction in a real-life scenario, that is, in a heavily multi-threaded environment with multiple tiers, each producing piles of distributed log files? I say, hardly at all. Sure, you can mine for certain isolated events, but you cannot easily extract causal relationships from an incoherent, distributed set of log messages:

13:49:59 INFO com.company.product.users.UserManager - User 'foo' has logged in.
13:49:59 INFO com.company.product.users.UserManager - User 'bar' has logged in.
...
13:50:00 SEVERE org.hibernate.exception.JDBCConnectionException: could not execute query
    at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:99)
...

After all, the ability to identify such contexts is key to deciding why a particular user action failed.
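
Just to illustrate how much manual plumbing it takes to get even basic context into your logs: a minimal sketch, assuming SLF4J's Mapped Diagnostic Context (MDC) and a hypothetical per-request entry point, would tag every message of a user transaction with an identifier that the pattern layout can print via %X{txId}. The identifier still has to be propagated by hand across threads and tiers, which brings us back to the developer dependency discussed above.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

import java.util.UUID;

public class TransactionContext {

    private static final Logger logger = LoggerFactory.getLogger(TransactionContext.class);

    // Hypothetical entry point of a user transaction: every log message written
    // while the MDC key is set can later be grouped by "txId", provided the
    // layout includes %X{txId} and every tier propagates the same identifier.
    public void handleRequest(Runnable businessLogic) {
        MDC.put("txId", UUID.randomUUID().toString());
        try {
            logger.info("Transaction started.");
            businessLogic.run();
        } finally {
            logger.info("Transaction finished.");
            MDC.remove("txId");
        }
    }
}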

Logging Impacts Performance
What is a thorough logging strategy worth if your users cannot use your application because it is terribly slow? In case you did not know, logging, especially during peak load times, may severely slow down your application. Let's have a quick look at some of the reasons:

Writing log messages from the application's memory to persistent storage, usually the file system, demands substantial I/O (see Top Performance Mistakes when moving from Test to Production: Excessive Logging). Traditional logger implementations write log files by issuing synchronous I/O requests, which puts the calling thread into a wait state until the log message has been fully written to disk.
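
To make the difference tangible, here is a minimal, purely illustrative sketch (plain JDK, not the API of any particular logging library) of an asynchronous appender: application threads drop messages into a bounded queue and return immediately, while a single background thread does the actual disk I/O.

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

public class AsyncFileLogger implements AutoCloseable {

    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(10_000);
    private final Thread writerThread;
    private volatile boolean running = true;

    public AsyncFileLogger(String fileName) {
        writerThread = new Thread(() -> {
            try (BufferedWriter writer = new BufferedWriter(new FileWriter(fileName, true))) {
                while (running || !queue.isEmpty()) {
                    String message = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (message != null) {
                        writer.write(message);
                        writer.newLine();
                    }
                }
            } catch (IOException | InterruptedException e) {
                // a real implementation would report this; omitted for brevity
            }
        }, "async-log-writer");
        writerThread.start();
    }

    // Called by application threads: enqueue the message and return immediately.
    // If the queue is full, the message is dropped instead of blocking the caller.
    public void log(String message) {
        queue.offer(message);
    }

    @Override
    public void close() throws InterruptedException {
        running = false;   // let the writer drain the queue and exit
        writerThread.join();
    }
}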

In some cases, the logger itself may become a considerable bottleneck: in the Log4j library (up to version 1.2), every single log activity results in a call to an internal template method, Appender.doAppend(), that is synchronized for thread safety (see Multithreading issues - doAppend is synchronised?). The practical implication is that threads logging to the same Appender, for example a FileAppender, must queue up behind every other thread that is currently writing a log message. Consequently, the application spends valuable time waiting on synchronization instead of doing whatever it was actually designed to do. This hurts performance, especially in heavily multi-threaded environments like web application servers.
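
The effect is easy to reproduce. The following sketch is not Log4j's actual code, just an illustration of the same locking pattern: with a synchronized doAppend() and a 1 ms critical section, ten threads logging 100 messages each need roughly a full second in total, because they serialize on the appender's lock.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SynchronizedAppenderDemo {

    // Mimics the locking pattern of Log4j 1.2's Appender.doAppend(): one lock
    // per appender instance, so all logging threads serialize here regardless
    // of how many CPU cores are available.
    static class SimpleAppender {
        public synchronized void doAppend(String message) {
            try {
                Thread.sleep(1); // stand-in for formatting and writing to disk
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleAppender appender = new SimpleAppender();
        ExecutorService pool = Executors.newFixedThreadPool(10);
        long start = System.nanoTime();
        for (int t = 0; t < 10; t++) {
            pool.submit(() -> {
                for (int i = 0; i < 100; i++) {
                    appender.doAppend("message " + i);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        // 1,000 messages times a 1 ms critical section take roughly one second
        // in total, because the threads queue up on the appender's lock.
        System.out.println("Took " + (System.nanoTime() - start) / 1_000_000 + " ms");
    }
}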

These performance effects can be vastly amplified when exception logging comes into play: exception data, such as the error message, the stack trace and any piggy-backed exceptions ("initial cause exceptions"), greatly increases the amount of data that needs to be logged. Additionally, once a system is in a faulty state, the same exceptions tend to appear over and over again, further hurting application performance. We once measured a 30% drain on CPU resources due to more than 180,000 exceptions being thrown in only 5 minutes on one of our application servers (see Performance Impact of Exceptions: Why Ops, Test and Dev need to care). If we had written these exceptions to the file system, they would have thrashed I/O, filled up our disk space in no time and considerably increased our response times.
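
To get a feeling for the data volumes involved, here is a small illustrative sketch that compares a plain log message with the full stack trace of a nested ("caused by") exception. The stack trace is easily one to two orders of magnitude larger, and it is written out again for every single occurrence of the exception.

import java.io.PrintWriter;
import java.io.StringWriter;

public class ExceptionPayloadDemo {

    public static void main(String[] args) {
        String plainMessage = "Could not execute query";

        // Build a nested exception, as a persistence layer typically would.
        Exception cause = new IllegalStateException("Connection pool exhausted");
        Exception wrapped = new RuntimeException("could not execute query", cause);

        StringWriter buffer = new StringWriter();
        wrapped.printStackTrace(new PrintWriter(buffer));
        String fullStackTrace = buffer.toString();

        System.out.println("Plain message length: " + plainMessage.length() + " characters");
        System.out.println("Stack trace length:   " + fullStackTrace.length() + " characters");
        // The stack trace dwarfs the plain message, and a faulty system tends to
        // produce the very same trace hundreds of thousands of times.
    }
}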

Consequently, it is safe to say that insightful logging and good performance are opposing goals: if you want one, you have to compromise on the other.

For more logging tips, read the full article.

More Stories By Martin Etmajer

Martin Etmajer has several years of experience as a Software Architect as well as in monitoring and managing performance in highly available cluster environments. In his current role as a Technology Strategist at the Compuware APM Center of Excellence, he contributes to the strategic development of Compuware’s dynaTrace APM solution, where he focuses on performance monitoring in Cloud technologies and along the Continuous Delivery deployment pipeline. Reach him at @metmajer
