Key Performance Metrics: Part 2 By @GrabnerAndi | @DevOpsSummit [#DevOps]

A look at the set of metrics captured from within the application server as well as the interaction with the database

Key Performance Metrics for Load Tests Beyond Response Time | Part 2

In Part 1 of this blog I explained which metrics on the web server, app server and host allow me to figure out how healthy the system and application environment are: Busy vs. Idle Threads, Throughput, CPU, Memory, etc.

In Part 2, I focus on the set of metrics captured from within the application server (#Exceptions, Errors, etc.) as well as the interaction with the database (connection pools, roundtrips to the database, amount of data loaded, etc.). Most of the screenshots shown in this blog come from real performance data shared by our Dynatrace Free Trial users, who leveraged my Share Your PurePath program where I helped them analyze the data they captured. I also hope you comment on this blog and share your metrics with the larger performance testing community.

1. Top Database Activity Metrics
The database is accessed by the application. Therefore, I capture most of my database metrics from the application itself by looking at the executed SQL statements:

  • Average # SQLs per User Over Time
    • If the number of SQLs per average user goes up, we most likely have a data-driven problem: the more data in the database, the more SQLs we execute.
    • Do we cache data, e.g., search results? Then this number should not go up but rather go down, as data should come from the cache.
  • Total # SQL Statements
    • Should at most grow with the number of simulated users
    • Otherwise it is a sign of bad caching or data-driven problems.
  • Slowest SQL Statements
    • Are there individual SQLs that can be optimized, either on the SQL level or in the database?
    • Do we need additional indices?
    • Can we cache result data of some of these heavy statements?
  • SQLs called very frequently
    • Do we have an N+1 Query Problem?
    • Can we cache some of that data if it is requested over and over again?
The following screenshot shows a custom dashboard with the number of database statements executed over time and on average per transaction/user:

Over time, the number of SQLs per end user should go down as certain data gets cached. Otherwise we may have data-driven or caching problems.
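To make the N+1 query concern concrete, here is a minimal sketch. The counter and the query stubs are hypothetical stand-ins for a real JDBC layer; the point is only to contrast the round-trip count of a per-row loop with a single batched IN statement:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class NPlusOneDemo {
    static int sqlCount = 0;                        // counts executed statements

    // "SELECT id FROM orders" - one round trip, returns 100 ids
    static List<Integer> selectOrderIds() {
        sqlCount++;
        return IntStream.range(0, 100).boxed().collect(Collectors.toList());
    }

    // "SELECT * FROM order_items WHERE order_id = ?" - one round trip per call
    static void selectItemsFor(int orderId) { sqlCount++; }

    // "SELECT * FROM order_items WHERE order_id IN (...)" - one round trip total
    static void selectItemsBatched(List<Integer> orderIds) { sqlCount++; }

    public static void main(String[] args) {
        for (int id : selectOrderIds()) selectItemsFor(id);   // N+1 pattern
        int nPlusOne = sqlCount;                              // 1 + 100

        sqlCount = 0;
        selectItemsBatched(selectOrderIds());                 // batched pattern
        int batched = sqlCount;                               // 2

        System.out.println(nPlusOne + " vs " + batched);      // 101 vs 2
    }
}
```

In a load test the N+1 variant shows up exactly as described above: the total # of SQLs grows far faster than the number of simulated users.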

The following screenshot shows my Database Dashboard, which provides several different diagnostics options to identify problematic database access patterns and slow SQLs:

Optimize individual SQLs, but also reduce the number of executed SQLs where results can be cached.
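One simple way to reduce executions of heavy statements is an in-memory result cache. The sketch below is an illustration, not production code: `executeSql` is a hypothetical stub standing in for a real JDBC call, and a real cache would also need invalidation and size limits. It memoizes identical statements so repeated requests never hit the database:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class QueryCache {
    static int dbRoundTrips = 0;                              // actual executions
    static final Map<String, String> cache = new ConcurrentHashMap<>();

    // Hypothetical query executor; real code would go through JDBC.
    static String executeSql(String sql) {
        dbRoundTrips++;
        return "result-of:" + sql;
    }

    // Serve repeated identical statements (e.g. search results) from memory.
    static String cachedQuery(String sql) {
        return cache.computeIfAbsent(sql, QueryCache::executeSql);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++)
            cachedQuery("SELECT * FROM products WHERE category='shoes'");
        System.out.println("round trips: " + dbRoundTrips);   // 1, not 1000
    }
}
```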

2. Top Connection Pool Metrics
Every application uses connection pools to access the database. Connection leaks, holding on to connections for too long, or improperly sized pools can result in performance problems. Here are my key metrics:

  • Connection Pool Utilization
    • Are the pools properly sized for the expected load per runtime (JVM, CLR, PHP...)?
    • Are pools constantly exhausted? Do we have a connection leak?
  • Connection Acquisition Time
    • Is the pool configured with just the right number of connections, so acquisition is immediate?
    • Or do we see increasing acquisition time (the time it takes to get a connection from the pool), which tells us we need more connections to fulfill demand?
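Acquisition time can also be measured directly in code. A minimal sketch, using a `Semaphore` with two permits as a stand-in for a two-connection pool, shows the wait time jumping once the pool is exhausted:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class PoolTiming {
    // Hypothetical pool: 2 permits standing in for 2 JDBC connections.
    static final Semaphore pool = new Semaphore(2);

    // Returns how long (ms) the caller waited to get a connection.
    static long timedAcquire() throws InterruptedException {
        long start = System.nanoTime();
        pool.acquire();
        return TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    }

    public static void main(String[] args) throws Exception {
        long t1 = timedAcquire();   // free permit: close to 0 ms
        long t2 = timedAcquire();   // free permit: close to 0 ms

        // A third caller must wait until someone returns a connection.
        new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException ignored) {}
            pool.release();         // connection handed back after ~200 ms
        }).start();
        long t3 = timedAcquire();   // exhausted pool: waits roughly 200 ms

        System.out.printf("acquisition times: %d %d %d ms%n", t1, t2, t3);
    }
}
```

This is exactly the signal to watch under load: healthy pools show acquisition times near zero, while exhausted pools show times that track how long transactions hold their connections.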

The following screenshot shows a custom dashboard with JDBC Connection Pool Metrics captured from WebLogic via JMX:

Are connection pools correctly sized in relation to incoming transactions? Do we have connection leaks?
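Pool metrics like these can be polled programmatically as well. The sketch below registers a hypothetical MBean on the platform MBeanServer and reads its attributes; the attribute names are modeled on WebLogic's JDBC data source runtime metrics, but the MBean itself is an illustration, not the real WebLogic API:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class PoolJmxDemo {
    // Hypothetical management interface mimicking JDBC pool runtime metrics.
    public interface PoolStatsMBean {
        int getActiveConnectionsCurrentCount();
        int getWaitingForConnectionCurrentCount();
    }

    // Dummy implementation returning fixed values for the demo.
    public static class PoolStats implements PoolStatsMBean {
        public int getActiveConnectionsCurrentCount()    { return 7; }
        public int getWaitingForConnectionCurrentCount() { return 0; }
    }

    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name =
            new ObjectName("demo:type=JDBCDataSourceRuntime,name=myPool");
        server.registerMBean(new PoolStats(), name);

        // A monitoring tool would poll these attributes on a schedule.
        int active  = (int) server.getAttribute(name, "ActiveConnectionsCurrentCount");
        int waiting = (int) server.getAttribute(name, "WaitingForConnectionCurrentCount");
        System.out.println("active=" + active + " waiting=" + waiting);
    }
}
```

Plotting the active count against incoming transactions over the course of a load test makes leaks visible: a leaking pool climbs steadily and never returns to its idle baseline.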

The following screenshot shows a Database Dashboard automatically calculating key metrics per connection pool:

Acquisition Time tells us how long a transaction needs to wait to acquire the next connection from the pool. This should be close to zero.


More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.


