Multi-Threaded Apps By @GrabnerAndi | @DevOpsSummit [#DevOps]

Why apps show a high response-time contribution to web requests coming from worker threads

How to Analyze Problems in Multi-Threaded Applications

As part of my Share Your PurePath and Performance Clinic initiatives I get to see lots of interesting problems out there. This time I picked two examples that came in just this week, from Balasz and Daniel. Both wanted my opinion on why their apps show a high response-time contribution to their web requests coming from worker threads that seem to be either in I/O or in a Wait state. The questions were: what are these threads waiting for, and is this something that could be optimized to speed up the slow response times they see on some of their critical web requests?

For both apps it turned out that the developers chose to "offload" work items to a pool of worker threads. That is of course a very valid design pattern. The way it was implemented, though, didn't fully leverage the advantage that multi-threading can give you. We identified two patterns, which I will now describe in more detail in the hope that your multi-threaded applications are not suffering from these performance anti-patterns (a code sketch of the first pattern follows the list):

  1. Sequential instead of parallel execution of background threads
  2. Many parallel background threads using "non-shareable" resources
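
To make the first pattern concrete, here is a minimal, self-contained Java sketch using java.util.concurrent. The names and timings are invented for illustration; this is not the code from either app. The anti-pattern variant blocks on the first task before even submitting the second, while the fixed variant submits both up front so the pool can actually run them in parallel:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundWorkDemo {

    // Anti-pattern #1: the second task is only submitted after the first
    // completes, so the "background" work effectively runs sequentially.
    static void sequential(ExecutorService pool) throws Exception {
        Future<?> first = pool.submit(() -> doWork("task-1"));
        first.get();                               // main thread blocks here
        Future<?> second = pool.submit(() -> doWork("task-2"));
        second.get();
    }

    // Fix: submit both tasks up front, then wait; the pool runs them in parallel.
    static void parallel(ExecutorService pool) throws Exception {
        Future<?> first  = pool.submit(() -> doWork("task-1"));
        Future<?> second = pool.submit(() -> doWork("task-2"));
        first.get();
        second.get();
    }

    static void doWork(String name) {
        try { Thread.sleep(1000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        System.out.println(name + " done on " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        long t0 = System.nanoTime();
        sequential(pool);
        System.out.printf("sequential: %.1fs%n", (System.nanoTime() - t0) / 1e9);
        t0 = System.nanoTime();
        parallel(pool);
        System.out.printf("parallel:   %.1fs%n", (System.nanoTime() - t0) / 1e9);
        pool.shutdown();
    }
}

Run against a two-thread pool, the sequential variant takes roughly the sum of both task durations, while the parallel variant takes roughly the duration of the longer one.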

I also want to take the opportunity to give you some tips on how to "read" Dynatrace PurePaths. Over the years I've developed my own technique and I hope you find my approach worthwhile to test on your own data.

Pattern #1: Sequential Execution of Threads and a bit more ...
I already tweeted about this one last week. I created the following slide for Balasz to present to his development team, including tips on what I look for when analyzing a PurePath:

PurePath showing that the main web request thread executes three RMI calls followed by two background threads, where the second one only starts when the first is finished - that's sequential execution

The many callouts I put on this slide might be a bit overwhelming. Let me explain the highlights.

The overall request response time, as measured on the web server, is 9.896s. The contributors to this response time are pointed out in the numbered callouts. Here is a short version:

  1. The main service request thread on the Java servlet container takes 3.203s, of which 1.770s is spent almost entirely in I/O.
  2. The main service request thread makes 3 RMI calls. The "Elapsed Time" (= the timestamp when that method was called, relative to the start of the request) shows us that the first one executes after 366ms, followed by the second and third in sequential order.
  3. After the third RMI call - at exactly 1.800s Elapsed Time - the first background thread is started. This thread takes 1.429s to execute.
  4. The second background worker thread was supposed to run in parallel - at least based on what Balasz told me. The Elapsed Time column, however, shows that it was executed AFTER the first background thread was done. This was not intended. It also explains the 1.770s that show up as I/O time on the main service thread: it is mainly the time the main thread waited for the first background thread to finish just so it could kick off the second one.
  5. The second background thread then really did execute in parallel, letting the main service request thread finish. This second background thread generates and writes the HTML using the HttpServletResponse output stream. This also explains the long waiting time on the web server: it was waiting 6.690s for that asynchronous background thread to write data to the output stream.
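
To map these callouts to code, here is a hypothetical reconstruction of the request flow the PurePath suggests. All class and method names are invented, this is not Balasz's actual implementation, and it assumes the javax.servlet API on the classpath. Note also that in a real container, writing to the response from a background thread after the service method returns would normally require the Servlet 3 async API; the sketch only illustrates the sequence of events the trace shows:

import java.io.IOException;
import java.io.OutputStream;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical reconstruction of the traced request flow - invented names.
public class ReportServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        callRemoteA();   // callout 2: three RMI calls, strictly one after another
        callRemoteB();
        callRemoteC();

        Thread first = new Thread(this::prepareData);   // callout 3: starts at ~1.800s elapsed
        first.start();
        try {
            first.join();   // callout 4: the main thread's ~1.770s of "I/O"/wait is really this join
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }

        // callout 5: only now does the second worker start; it writes the HTML
        // asynchronously, so the web server keeps waiting (~6.690s) on the stream.
        OutputStream out = resp.getOutputStream();
        new Thread(() -> renderHtml(out)).start();
    }

    private void callRemoteA() { /* RMI call */ }
    private void callRemoteB() { /* RMI call */ }
    private void callRemoteC() { /* RMI call */ }
    private void prepareData() { /* background work, ~1.429s */ }
    private void renderHtml(OutputStream out) { /* generate and write HTML */ }
}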

Lesson Learned: Verify if your threads execute as intended
Looking at the actual timestamp (= Elapsed Time) tells you when your threads really start and whether they execute in parallel or not. It's a great way to verify your actual implementation, and it lets you learn which component is waiting for which other component.
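
If you don't have a tracing tool at hand, you can approximate the Elapsed Time column with a few lines of logging. This is a minimal sketch (invented names) that records each worker's start offset relative to the start of the request:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Minimal sketch: log each task's start offset relative to the request start,
// mimicking the "Elapsed Time" column, to verify that tasks really overlap.
public class ElapsedTimeCheck {
    public static void main(String[] args) throws InterruptedException {
        long requestStart = System.nanoTime();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 1; i <= 2; i++) {
            final int id = i;
            pool.execute(() -> {
                long elapsedMs = (System.nanoTime() - requestStart) / 1_000_000;
                System.out.printf("worker-%d started at +%d ms on %s%n",
                        id, elapsedMs, Thread.currentThread().getName());
                try { Thread.sleep(500); } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}

If the second worker's start offset roughly equals the first worker's duration, your "parallel" threads are really running one after another.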

For more patterns and lessons learned, click here for the full article.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
