
How to Analyze Problems in Multi-Threaded Applications

Why apps show a high response time contribution to web requests coming from worker threads

As part of my Share Your PurePath and Performance Clinic initiatives I get to see a lot of interesting problems out there. This time I picked two examples that came in just this week, from Balasz and Daniel. Both wanted my opinion on why their apps show a high response time contribution to their web requests coming from worker threads that seem to be either in I/O or in a Wait state. The question was what these threads are waiting for, and whether this is something that could be optimized to speed up the slow response times they see on some of their critical web requests.

For both apps it turned out that the developers chose to "offload" work items to a pool of worker threads. That is of course a perfectly valid design pattern. The way it was implemented, though, didn't fully leverage the advantage that multi-threading can give you. We identified two patterns that I will describe in more detail, in the hope that your multi-threaded applications are not suffering from these performance anti-patterns (a short code sketch of the first one follows the list):

  1. Sequential instead of parallel execution of background threads
  2. Many parallel background threads using "non-shareable" resources
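
To make the first anti-pattern concrete, here is a minimal Java sketch of the difference. It is my own illustration, not code from Balasz's or Daniel's applications, and the class and method names are made up: blocking on each work item's Future before submitting the next one turns a worker pool into sequential execution, while submitting everything up front and waiting afterwards is what actually buys you parallelism.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class OffloadSketch {

        private static final ExecutorService pool = Executors.newFixedThreadPool(4);

        // Anti-pattern #1: the second work item is only submitted after the first
        // one has finished, so the "background" work runs sequentially and the
        // calling (main request) thread blocks the whole time.
        static void sequential() throws Exception {
            Future<?> first = pool.submit(OffloadSketch::workItemA);
            first.get();                                               // caller waits here ...
            Future<?> second = pool.submit(OffloadSketch::workItemB);  // ... before B even starts
            second.get();
        }

        // Intended pattern: submit both work items up front so they can run in
        // parallel, then wait for both results once.
        static void parallel() throws Exception {
            Future<?> first = pool.submit(OffloadSketch::workItemA);
            Future<?> second = pool.submit(OffloadSketch::workItemB);
            first.get();
            second.get();
        }

        public static void main(String[] args) throws Exception {
            parallel();          // the intended, parallel variant
            pool.shutdown();
        }

        static void workItemA() { /* e.g. the first background work item */ }
        static void workItemB() { /* e.g. the second background work item */ }
    }

In the sequential variant the pool is pure overhead: the caller spends most of its time waiting, which is exactly the kind of I/O and Wait time that then shows up attributed to the worker threads.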

I also want to take the opportunity to give you some tips on how to "read" Dynatrace PurePaths. Over the years I've developed my own technique, and I hope you find my approach worth trying on your own data.

Pattern #1: Sequential Execution of Threads and a bit more ...
I already tweeted about this one last week. I created the following slide for Balasz to present to his development team, including tips on what I look for when analyzing a PurePath:

PurePath showing that the main web request thread executes three RMI calls followed by two background threads, where the second one only starts after the first is finished - that is sequential execution

The many callouts I put on this slide might be a bit overwhelming. Let me explain the highlights.

The overall request response time as measured on the web server is 9.896s. The contributors to this response time are pointed out in the numbered callouts. Here is a short version:

  1. The main service request thread on the Java servlet container takes 3.203s, of which 1.770s is almost entirely spent in I/O.
  2. The main service request thread makes 3 RMI calls. The "Elapsed Time" (= the timestamp of when that method was called, relative to the start of the request) shows us that the first one executes after 366ms, followed by the second and third in sequential order.
  3. After the third RMI call - at exactly 1.800s Elapsed Time - the first background thread is started. This thread takes 1.429s to execute.
  4. The second background worker thread was supposed to run in parallel - at least based on what Balasz told me. The Elapsed Time column, however, shows that it was executed AFTER the first background thread was done. This was not intended. It also explains the 1.770s that show up as I/O time on the main service thread: it is mainly the time the main thread waited for the first background thread to finish, just to kick off the second background thread.
  5. The second background thread then really did execute in parallel, letting the main service request thread finish. It generates the HTML and writes it using the HttpServletResponse output stream. This also explains the long waiting time on the web server: it waited 6.690s for that asynchronous background thread to write data to the output stream (a sketch of this kind of response hand-off follows below).
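
Point 5 describes a background thread that keeps writing the generated HTML to the HttpServletResponse output stream after the main request thread has finished. The PurePath doesn't show the code behind it, but a common way to do this kind of hand-off in a Servlet 3.0+ container is AsyncContext. The sketch below is a hypothetical illustration of that pattern (servlet name, URL pattern and pool size are my assumptions), not Balasz's actual implementation:

    import java.io.IOException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import javax.servlet.AsyncContext;
    import javax.servlet.annotation.WebServlet;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // Hypothetical servlet: the container thread returns immediately while a worker
    // thread generates and writes the HTML, then signals completion to the container.
    @WebServlet(urlPatterns = "/report", asyncSupported = true)
    public class ReportServlet extends HttpServlet {

        private final ExecutorService workers = Executors.newFixedThreadPool(8);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
            AsyncContext async = req.startAsync();          // detach from the container thread
            workers.submit(() -> {
                try {
                    // the background thread produces the HTML and writes it to the response
                    async.getResponse().getWriter().write("<html>...generated report...</html>");
                } catch (IOException e) {
                    // log the failure; complete() below still releases the request
                } finally {
                    async.complete();                       // tell the container we are done
                }
            });
        }
    }

The important part is that async.complete() is only called once the background thread has written everything - until then the web server keeps the connection open and waits, which matches the 6.690s wait described in point 5 above.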

Lesson Learned: Verify if your threads execute as intended
Looking at the actual timestamp (= Elapsed Time) tells you when your threads really get started and whether they execute in parallel or not. It's a great way to verify your actual implementation, and it lets you learn which component is waiting for which other component.
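
You don't necessarily need a profiler for this first sanity check. Here is a minimal sketch of the same idea in plain Java (the names are mine): record the request start time once and log each worker's offset from it when the task begins - the printed offsets are essentially what the Elapsed Time column shows.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class ElapsedTimeCheck {

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            long requestStart = System.nanoTime();   // "start of the request"

            // If these two really run in parallel, both should log an offset close to 0 ms.
            pool.submit(timed(requestStart, "background-1", ElapsedTimeCheck::workA));
            pool.submit(timed(requestStart, "background-2", ElapsedTimeCheck::workB));

            pool.shutdown();
        }

        // Wraps a task so that it logs when it actually starts, relative to the request start.
        static Runnable timed(long requestStart, String name, Runnable task) {
            return () -> {
                long elapsedMs = (System.nanoTime() - requestStart) / 1_000_000;
                System.out.printf("%s started at +%d ms on %s%n",
                        name, elapsedMs, Thread.currentThread().getName());
                task.run();
            };
        }

        static void workA() { /* first background work item */ }
        static void workB() { /* second background work item */ }
    }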

For more patterns and lessons learned, click here for the full article.

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor to the Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi.
