Choosing the APM System that Is Right for You

A lot of the arguing in the APM space is about the fundamental approach to monitoring application transactions

In my role as technology evangelist I spend a lot of time helping organizations, big and small, make their IT systems better, faster and more resilient to faults so that they can support their business operations and objectives. I always find it frustrating to "argue" with our competitors about which solution is best. I honestly think that many APM tools on the market do a good job, each with advantages and disadvantages in certain use cases. There is no "one size fits all" - there is only the tool that fits best for your APM maturity level.

A lot of the arguing in the APM space is about the fundamental approach to monitoring application transactions: monitor and capture ALL details vs. monitor and capture relevant details. Along with that come topics like "overhead impact", "scalability" and "data hoarding vs. smart analytics".

Ultimately, you want to pick the right tool to solve your problems. Since you have multiple tools to choose from, let me highlight some of the use cases our customers solve. As a technologist and a blogger, what I really care about is that the right technology is applied to the right problem, so I feel compelled to share what I have learned working with customers in the trenches. Hopefully this will help you understand the technology, see which real-life problems it can solve, and cut through the propaganda. Let me start with a few use cases today and cover more in follow-up blog posts.

Use Cases from Steven - A Performance Engineer
The first use cases come from Steven, whom I reached out to after reading his question on our APM Community Forum. His company decided to move from a competitor's product to our APM solution, and I wondered why. In an email, he explained that he had some initial success with the old tool and had been able to solve a couple of low-hanging problems. But when they decided to take a strategic Continuous Delivery approach to software delivery, they realized the tool had shortcomings that slowed their attempts to practice DevOps.

They identified the following key problems they needed to solve, and what they really required from an APM solution in order to get where they were heading:

See how a user got to a problem, not just the problem itself

  • Every transaction, with all details they need, out-of-the-box
  • Web request/response bytes, SQL bind values, exception details for every transaction

Number of transactions executed per user and tenant, used for business and cost reporting

  • Capture custom business context data for every transaction
  • Define business transactions based on "buried" context data, as not every detail is in the URL

Eliminate homegrown tools which are costly to maintain

  • Provide application as well as system and infrastructure monitoring
  • Integrate with other tools such as JMeter, LoadRunner, Jenkins or HP OpenView

Eliminate the need to make people look at multiple tools and data sources

  • Foster collaboration across Architects, Dev, Test & Ops by working from the same data set
  • Data must be shareable with a single click

Ability to extend to custom frameworks, systems and protocols

  • Bring in custom metrics from external tools via Java Plugin infrastructure
  • Follow transactions across custom protocols or technologies outside Java & .NET

Full Automation to support Continuous Delivery

  • Use metrics provided by the APM for every build artifact along the deployment pipeline to act as quality gates (see the sketch below)
  • Inform the APM about new deployments to prevent false alerting
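
To make the quality-gate idea concrete, here is a minimal sketch of such a pipeline step, assuming a hypothetical REST endpoint on the APM server that returns the average response time measured for a given build - the URL, metric name and threshold are invented placeholders, not an actual dynaTrace API. The point is simply that a non-zero exit code is all a Jenkins build step needs to stop a bad artifact from moving down the pipeline:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hypothetical quality gate: compare the APM-reported average response
    // time for this build against an agreed baseline and fail the build
    // (non-zero exit) on regression. URL and metric are placeholders.
    public class ApmQualityGate {
        private static final String METRIC_URL =
                "http://apm.example.com/api/metrics/avg-response-time?build=";
        private static final double BASELINE_MS = 250.0;

        public static void main(String[] args) throws Exception {
            String buildId = args.length > 0 ? args[0] : "latest";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(METRIC_URL + buildId)).build();
            // Assumes the endpoint returns the metric as a plain number
            String body = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString()).body();
            double avgMs = Double.parseDouble(body.trim());
            if (avgMs > BASELINE_MS) {
                System.err.printf("Quality gate FAILED: %.1f ms > %.1f ms%n",
                        avgMs, BASELINE_MS);
                System.exit(1); // a non-zero exit fails the Jenkins step
            }
            System.out.printf("Quality gate passed: %.1f ms%n", avgMs);
        }
    }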

Replace traditional application logging

  • Eliminate log files, which saves I/O and storage
  • Capture log messages in the context of the transaction and the user that triggered them (see the sketch below)
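
For contrast, here is a minimal sketch of the homegrown pattern this requirement replaces: manually threading context through a logging MDC. The SLF4J API shown is real (and assumed to be on the classpath); the transaction-ID plumbing is invented for illustration. An APM that captures each log message inside the transaction's trace makes this bookkeeping - and the log files themselves - unnecessary:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;
    import org.slf4j.MDC;

    // The homegrown pattern: every entry point must push transaction and
    // user context into the logging MDC by hand so log lines can later be
    // correlated - and must clean up after itself.
    public class ManualLogContext {
        private static final Logger log =
                LoggerFactory.getLogger(ManualLogContext.class);

        public void handleRequest(String transactionId, String user) {
            MDC.put("txId", transactionId); // repeated at every entry point
            MDC.put("user", user);
            try {
                log.info("processing request"); // layout appends txId/user
                // ... business logic ...
            } finally {
                MDC.clear(); // forget this and context leaks across threads
            }
        }
    }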

One solution for everything

  • Not just performance monitoring but also business reporting and deep-dive diagnostics

Active community forum

  • Get answers right away
  • Leverage extensions already provided by the community such as plugins for Jenkins, PagerDuty, ...

Let me give you some examples of Steven's use cases so that you can better decide whether they are relevant for you as well:

Every Transaction with All Details
dynaTrace was built from the ground up to support the full software lifecycle. We at Compuware APM/dynaTrace understood that we needed a technology that captures every transaction with all details, for root-cause diagnostics as well as proper business monitoring, without falling into a sampling mode where you lose critical information for both. Most of our customers report little to acceptable overhead in production while capturing 100% of transactions, including method arguments, SQL statements, log messages and exceptions. The magic word in our case is our PurePath (see the YouTube video) & PureStack technology, which allows dynaTrace to do exactly that. One of the several visualizations of the PurePath is the Transaction Flow, which is a great way to understand how your transactions flow through the system - where your hotspots are (third-party impact, custom code issues or impact of garbage collection) and where your architectural issues are (e.g., too many web service calls, too many SQL executions):

Transaction Flow: One View that tells it all to Devs, Architects and Operations Teams

What if you don't capture all transactions but instead are "smart" and focus on capturing the problematic ones? This approach lets you find and fix the easy problems - those visible in transactions that fail or violate an average response-time-based baseline - but it falls short for problems caused by transactions that are not "outside the norm". One example is a database deadlock we recently analyzed for a customer. The "smart" approach only highlighted the transaction that hit the deadlock; no information was captured about the transactions that actually caused the deadlock with their data manipulations. To solve this problem you need to see which transactions executed which UPDATE statements in the time leading up to the deadlock.
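
To make that deadlock scenario concrete, here is a schematic sketch of the kind of transaction pair involved, written as plain JDBC with invented table and column names. When the two methods run concurrently, each acquires a row lock in its first UPDATE and then waits for the row the other one already holds:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Schematic of the deadlock: two concurrent transactions update the
    // same two rows in opposite order (table/column names are invented).
    public class DeadlockSketch {

        static void transactionA(Connection con) throws SQLException {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                st.executeUpdate(
                    "UPDATE accounts SET balance = balance - 10 WHERE id = 1");
                // ... other work while holding the lock on row 1 ...
                st.executeUpdate( // waits for B's lock on row 2
                    "UPDATE accounts SET balance = balance + 10 WHERE id = 2");
            }
            con.commit();
        }

        static void transactionB(Connection con) throws SQLException {
            con.setAutoCommit(false);
            try (Statement st = con.createStatement()) {
                st.executeUpdate(
                    "UPDATE accounts SET balance = balance - 20 WHERE id = 2");
                // ... other work while holding the lock on row 2 ...
                st.executeUpdate( // waits for A's lock on row 1 -> deadlock
                    "UPDATE accounts SET balance = balance + 20 WHERE id = 1");
            }
            con.commit();
        }
    }

The database resolves the cycle by killing one transaction - say transactionA - as the deadlock victim. A tool that only records failing transactions captures A's error, but never sees transactionB's UPDATE statements that held the conflicting lock, which is exactly the data you need for the root cause.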

As companies such as Steven's reach a maturity level where they grow out of "smart," average-response-time-based analysis, it is important to be able to look at everything and not just the average problem. As a follow-up, read the blog post Why Averages Suck and Percentiles are Great!

Capture Custom Business Context
What is custom business context? The actual business function executed, such as "Create Claim" or "Transfer Money", or the name of the user or tenant of your system. Why is this not as easy as it sounds? Because many applications just don't expose the business function as part of the URL or provide the user name in a cookie. A great example was given in a webinar by NJM Insurance (New Jersey Manufacturers Insurance). They were using third-party claim management software that was designed to "hide" everything behind a claimCenter.do URL. In their case they needed dynaTrace to analyze every single transaction and pick up a method argument from the business layer of their app to figure out which function in the system was actually executed. On top of that, they also needed to know the user that executed the function, because they had to understand which insurance office and group of employees created how many claims for their quarterly business reports. The following shows business reporting based on the user role, where the user role is captured from a method argument within the business logic of the application:

Business Reporting requires Business Context data for every Transaction

This was only possible because dynaTrace allows you to selectively capture business context within every single executed transaction. Along the PurePath you will then see things like method arguments, return values, bind values, session variables, HTTP parameters or cookie values, all of which can later be used for business reporting or targeted root-cause diagnostics. Here is a follow-up blog post that explains business transactions in more technical detail.
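
To illustrate what such a capture rule targets, here is a hypothetical business-layer entry point in the style of the claim-management example above (all class, method and argument names are invented). Since the web tier routes every action through the same claimCenter.do URL, only a method argument deep in the application reveals the business function and the user's role:

    // Hypothetical business-layer entry point of a claim-management app.
    // The web tier routes every action through the same claimCenter.do URL,
    // so only the 'action' argument (plus, e.g., the user's role) identifies
    // the business function - which is what an APM capture rule targets.
    public class ClaimCenterService {

        public void execute(String action, String userRole, String claimId) {
            // action   = "CreateClaim", "UpdateClaim", ...
            // userRole = e.g. the insurance office / employee group
            if ("CreateClaim".equals(action)) {
                createClaim(claimId);
            } else if ("UpdateClaim".equals(action)) {
                updateClaim(claimId);
            } else {
                throw new IllegalArgumentException("Unknown action: " + action);
            }
        }

        private void createClaim(String claimId) { /* ... */ }
        private void updateClaim(String claimId) { /* ... */ }
    }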

For more APM buyer's tips and further insight, click here for the full article.

More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team, where he influences the Compuware APM product strategy and works closely with customers implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
