Key Performance Metrics: Part 2 By @GrabnerAndi | @DevOpsSummit [#DevOps]

A look at the set of metrics captured from within the application server as well as the interaction with the database

Key Performance Metrics for Load Tests Beyond Response Time | Part 2

In Part 1 of this blog I explained which metrics on the Web Server, App Server and Host allow me to figure out how healthy the system and application environment are: Busy vs. Idle Threads, Throughput, CPU, Memory, etc.

In Part 2, I focus on the set of metrics captured from within the application server (#Exceptions, Errors, etc.) as well as the interaction with the database (connection pools, roundtrips to the database, amount of data loaded, etc.). Most of the screenshots shown in this blog come from real performance data shared by our Dynatrace Free Trial users who leveraged my Share Your PurePath program, where I helped them analyze the data they captured. I also hope you comment on this blog and share your metrics with the larger performance testing community.

1. Top Database Activity Metrics
The database is accessed by the application, so I capture most of my database metrics from the application itself by looking at the executed SQL statements:

  • Average # SQLs per User Over Time
    • If the # of SQLs per average user goes up, we most likely have a data-driven problem: the more data in the database, the more SQLs we execute.
    • Do we cache data, e.g., search results? Then this number should not go up but rather go down, as data should come from the cache.
  • Total # SQL Statements
    • Should at most grow with the number of simulated users.
    • Otherwise it is a sign of bad caching or data-driven problems.
  • Slowest SQL Statements
    • Are there individual SQLs that can be optimized, either on the SQL level or in the database?
    • Do we need additional indices?
    • Can we cache the result data of some of these heavy statements?
  • SQLs called very frequently
    • Do we have an N+1 Query Problem?
    • Can we cache some of that data if it is requested over and over again?
The following screenshot shows a custom dashboard with the number of database statements executed over time and on average per transaction/user:

Over time, the number of SQLs per end user should go down as certain data gets cached. Otherwise we may have data-driven or caching problems.
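
To make this metric concrete, here is a minimal sketch (not how Dynatrace captures it) of counting executed SQL statements per logical transaction from within the application, e.g. called from a thin JDBC wrapper or interceptor. All class and method names are hypothetical:

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/**
 * Hypothetical helper that counts executed SQL statements per transaction name.
 * Call recordSql() from a JDBC wrapper/interceptor right before each execute.
 */
public final class SqlCounter {

    private static final ConcurrentHashMap<String, LongAdder> SQL_COUNTS = new ConcurrentHashMap<>();
    private static final ConcurrentHashMap<String, LongAdder> TX_COUNTS = new ConcurrentHashMap<>();

    private SqlCounter() {}

    /** Record one SQL execution for the given transaction (e.g. "search", "checkout"). */
    public static void recordSql(String transactionName) {
        SQL_COUNTS.computeIfAbsent(transactionName, k -> new LongAdder()).increment();
    }

    /** Record one completed transaction. */
    public static void recordTransaction(String transactionName) {
        TX_COUNTS.computeIfAbsent(transactionName, k -> new LongAdder()).increment();
    }

    /** Average # SQLs per transaction - the metric discussed above. */
    public static double avgSqlsPerTransaction(String transactionName) {
        long transactions = TX_COUNTS.getOrDefault(transactionName, new LongAdder()).sum();
        long sqls = SQL_COUNTS.getOrDefault(transactionName, new LongAdder()).sum();
        return transactions == 0 ? 0.0 : (double) sqls / transactions;
    }
}

If this average climbs during a load test while the transaction mix stays constant, that is exactly the data-driven or caching problem described above.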

The following screenshot shows my Database Dashboard, which provides several different diagnostics options to identify problematic database access patterns and slow SQLs:

Optimize individual SQLs, but also reduce the number of SQL executions where results can be cached.
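
One simple way to cut repeated executions of the same heavy statement is a read-through cache in front of it. This is just a sketch with hypothetical names and a plain ConcurrentHashMap; a production cache would also need expiry and a size limit:

import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Hypothetical read-through cache for heavy, frequently repeated queries (e.g. search results). */
public class QueryResultCache {

    private final ConcurrentHashMap<String, List<String>> cache = new ConcurrentHashMap<>();

    /**
     * Returns the cached result for the query key, or loads it from the database
     * (via the supplied loader) once and caches it for subsequent callers.
     */
    public List<String> getOrLoad(String queryKey, Function<String, List<String>> dbLoader) {
        return cache.computeIfAbsent(queryKey, dbLoader);
    }
}

With a cache like this in front of the hottest statements, the Total # SQL Statements metric should flatten out instead of growing with every simulated user.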

2. Top Connection Pool Metrics
Every application uses connection pools to access the database. Connection leaks, holding on to connections for too long, or improperly sized pools can all result in performance problems. Here are my key metrics:

  • Connection Pool Utilization
    • Are the pools properly sized based on the expected load per runtime (JVM, CLR, PHP, ...)?
    • Are pools constantly exhausted? Do we have a connection leak?
  • Connection Acquisition Time
    • Are we perfectly configured, with just the right number of connections in the pool?
    • Or do we see increasing acquisition time (the time it takes to get a connection from the pool), which tells us we need more connections to fulfill the demand?

The following screenshot shows a custom dashboard with JDBC Connection Pool Metrics captured from WebLogic via JMX:

Are connection pools sized correctly in relation to the incoming transactions? Do we have connection leaks?
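
These pool metrics can also be read programmatically via JMX. The following sketch polls connection pool MBeans from within the JVM; the ObjectName pattern and attribute names are assumptions based on WebLogic's JDBCDataSourceRuntime MBeans and need to be adjusted to whatever your application server actually registers:

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

/** Sketch: poll connection pool MBeans and print utilization-related attributes. */
public class PoolMetricsPoller {

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // Hypothetical ObjectName pattern - adjust to the MBeans your app server registers.
        Set<ObjectName> pools = server.queryNames(
                new ObjectName("com.bea:Type=JDBCDataSourceRuntime,*"), null);

        for (ObjectName pool : pools) {
            // Attribute names are assumptions; check your server's MBean documentation.
            Object active  = server.getAttribute(pool, "ActiveConnectionsCurrentCount");
            Object waiting = server.getAttribute(pool, "WaitingForConnectionCurrentCount");
            System.out.println(pool.getKeyProperty("Name")
                    + " active=" + active + " waitingForConnection=" + waiting);
        }
    }
}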

The following screenshot shows a Database Dashboard automatically calculating key metrics per connection pool:

Acquisition Time tells us how long a transaction needs to wait to acquire the next connection from the pool. This should be close to zero.
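
If your pool does not expose acquisition time directly, a rough approximation is to time the getConnection() call in the application itself. This sketch is illustrative only and is not how the dashboard above calculates the metric:

import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;

/** Sketch: measure how long a transaction waits to get a connection from the pool. */
public class TimedConnectionSource {

    private final DataSource dataSource;

    public TimedConnectionSource(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public Connection getConnection() throws SQLException {
        long start = System.nanoTime();
        Connection connection = dataSource.getConnection();   // blocks if the pool is exhausted
        long acquisitionMillis = (System.nanoTime() - start) / 1_000_000;

        // Report the acquisition time to your metrics backend; logging is a placeholder here.
        System.out.println("connection acquisition took " + acquisitionMillis + " ms");
        return connection;
    }
}

Acquisition times close to zero under full load mean the pool is sized well; values that grow with load point to an undersized pool or a connection leak.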


More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
