
Configuration settings that are enabled during application development are very useful

When Configuration Settings from Development Wreak Havoc in Production
By Asad Ali

As applications are promoted from the development environment to the CI or QA environment and then into the production environment, it is very common for configuration settings to be changed as the code is promoted. For example, the database connection pool settings are typically lower in the development environment than in the QA/load-testing environment. These configuration differences exist primarily to get the best application performance out of each environment. Occasionally, however, application code is mistakenly promoted into production without these settings being changed, and that can cause performance havoc in the production environment. This blog describes one such scenario.
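As a hypothetical illustration (the property name and the values below are not from this incident), the same code base can size its connection pool from an environment-specific setting, which is exactly the kind of value that has to change as the code moves between environments:

```java
// Hypothetical sketch: size a connection pool from an environment-specific
// system property. The property name and the numbers are illustrative only;
// the point is that development and production deliberately use different values.
public class ConnectionPoolSettings {

    public static void main(String[] args) {
        // e.g. started with -Ddb.pool.maxSize=5 in development
        // and  -Ddb.pool.maxSize=50 in QA/load testing and production
        int maxPoolSize = Integer.parseInt(System.getProperty("db.pool.maxSize", "5"));
        System.out.println("Creating connection pool with maxSize=" + maxPoolSize);
        // ... hand maxPoolSize to whatever pooling library the application uses
    }
}
```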

During their proof of concept, a prospect asked us to identify and resolve an issue they were observing in the production environment. They had just promoted a newer version of a critical application to production, and soon after the promotion they started to see a significant increase in response time when end users tried to log in to the web application. To diagnose the issue, the prospect injected our agents into the production JVMs that were exhibiting the issue. With Dynatrace agents injected, we observed both the high response time for login and that most of the time for the login request was spent in class loading.

Additionally, the breakdown of the response time showed that most of the time for the login web request was spent in synchronization (92%).

The response time breakdown of the login request clearly shows that 92% of the time is spent in synchronization.

With the response time breakdown showing the highest amount of time in synchronization, we took a thread dump on the JVM where the login request was being processed to get insight into the thread-locking issue. The thread dump showed 67 threads that were blocked in the JVM.

67 Threads were blocked on the JVM.

Locking Hotspots view shows the breakdown of the blocked threads

Further analysis of the content of the thread dump showed that most of the blocked threads were waiting for the resource (CompoundClassLoader) held by one running thread (Thread Id 5570678).

Thread Id 5570678 owned the monitor on which the other threads were blocked.

CompoundClassLoader is held by Thread 5570678.

All blocked threads are waiting on CompoundClassLoader.
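As a side note, the same kind of information the thread dump exposed, namely which threads are BLOCKED and which thread owns the contended monitor, can also be pulled programmatically. The sketch below uses the standard ThreadMXBean API from the JDK; it is not the tooling used during this proof of concept:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Minimal sketch: list BLOCKED threads and the owner of the monitor they wait on.
// It reproduces the kind of data shown above; it is not the tool used in the article.
public class BlockedThreadReport {

    public static void main(String[] args) {
        ThreadMXBean threadBean = ManagementFactory.getThreadMXBean();
        // true, true => include locked monitors and ownable synchronizers
        ThreadInfo[] infos = threadBean.dumpAllThreads(true, true);

        for (ThreadInfo info : infos) {
            if (info.getThreadState() == Thread.State.BLOCKED) {
                System.out.printf("Thread %d (%s) is BLOCKED on %s owned by thread %d (%s)%n",
                        info.getThreadId(),
                        info.getThreadName(),
                        info.getLockName(),        // e.g. the class loader's monitor
                        info.getLockOwnerId(),
                        info.getLockOwnerName());
            }
        }
    }
}
```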

The thread stack trace for the running thread showed that it was trying to load a class from the file system. Examining the full trace of the thread dump showed that the threads on this JVM were creating a new Facelet every time a web request was received.

getFacelet being called every time a request is processed by the thread.

While it is perfectly normal for a JVM to spend some time loading classes when it is initially started, it is NOT normal (or good for performance) for class loading to continue even after the JVM has warmed up and most of the classes are already loaded.

The thread stack showed a call to the getFacelet(java.net.URL) method in the com.sun.facelets.impl.DefaultFaceletFactory class. A review of the source of this class showed that the method reloads the Facelet if needsToBeRefreshed() returns true.

Code snippet of com.sun.facelets.impl.DefaultFaceletFactory class.
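Since the original code screenshot is not reproduced here, the following is a paraphrased sketch of the caching logic in getFacelet(URL), reconstructed from the description above rather than copied from the Facelets source: the cached Facelet is thrown away and rebuilt whenever needsToBeRefreshed() returns true.

```java
// Paraphrased sketch of the caching logic in
// com.sun.facelets.impl.DefaultFaceletFactory#getFacelet(URL) -- not the verbatim source.
public Facelet getFacelet(URL url) throws IOException {
    String key = url.toString();
    DefaultFacelet facelet = (DefaultFacelet) this.facelets.get(key);
    if (facelet == null || needsToBeRefreshed(facelet)) {
        // Re-parses the XHTML and triggers the class loading seen in the thread dump
        facelet = createFacelet(url);
        this.facelets.put(key, facelet);
    }
    return facelet;
}
```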

And finally, the code for needsToBeRefreshed() clearly shows that it returns true if the refreshPeriod is set to 0.

needsToBeRefreshed() method shows that if the refresh period is set to 0, the class is refreshed every time.
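Again as a paraphrased sketch rather than the verbatim source, the decision described above boils down to the following: 0 means refresh on every request, -1 means never refresh, and any other value acts as a time-to-live.

```java
// Paraphrased sketch of com.sun.facelets.impl.DefaultFaceletFactory#needsToBeRefreshed
// -- not the verbatim source.
protected boolean needsToBeRefreshed(DefaultFacelet facelet) {
    if (this.refreshPeriod == 0) {
        return true;    // refresh on every request (typical development setting)
    }
    if (this.refreshPeriod == -1) {
        return false;   // never refresh (typical production setting)
    }
    long expiry = facelet.getCreateTime() + this.refreshPeriod;
    return System.currentTimeMillis() > expiry;   // refresh once the period has elapsed
}
```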

Facelets can be compiled once and cached if you set javax.faces.FACELETS_REFRESH_PERIOD to -1. However, once it is set to -1, JSF never re-compiles/re-parses the Facelets files and holds the entire SAX-compiled/parsed XML tree in memory.


During development, the REFRESH_PERIOD is typically set to 0 because it allows developers to keep editing the Facelets files without having to restart the server. What happened at this prospect is that the application code was promoted into production with the REFRESH_PERIOD still set to 0, so every time a user tried to log in, the Facelets were forced to recompile, and that in turn resulted in the high response times.
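One cheap safety net, offered here as a sketch rather than something from the original incident, is to check the effective setting when the application starts and complain loudly if a development-only value made it into production. The listener below assumes the JSF 2.x parameter name javax.faces.FACELETS_REFRESH_PERIOD; older Facelets 1.x builds used facelets.REFRESH_PERIOD instead.

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

// Sketch of a startup guard: warn if a development-only Facelets refresh period
// has been promoted to production. The parameter name assumes JSF 2.x
// (javax.faces.FACELETS_REFRESH_PERIOD); Facelets 1.x used facelets.REFRESH_PERIOD.
public class RefreshPeriodGuard implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent event) {
        String refreshPeriod =
                event.getServletContext().getInitParameter("javax.faces.FACELETS_REFRESH_PERIOD");
        if ("0".equals(refreshPeriod)) {
            // In production this should be -1 (compile once, never re-parse)
            System.err.println("WARNING: FACELETS_REFRESH_PERIOD=0 -- Facelets will be "
                    + "recompiled on every request. Did a development setting leak through?");
        }
    }

    @Override
    public void contextDestroyed(ServletContextEvent event) {
        // nothing to clean up
    }
}
```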

Conclusion
Configuration settings that are enabled during application development are very useful, as they reduce the number of times the application server has to be restarted to test code changes. However, as this example shows, it is very important to disable development-level settings as the code moves out of the development environment, because they can cause performance havoc in the production environment. One of the best practices to eliminate such scenarios is to make these configuration changes part of the continuous integration pipeline, using tools like Jenkins.
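As a hedged sketch of what such a pipeline step could look like (the web.xml path and the expected value are assumptions, not details from this incident), a Jenkins job could run a small check before deployment that parses the production web.xml and fails the build if the Facelets refresh period is not the production value:

```java
import java.io.File;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;

// Sketch of a build-time guard that Jenkins could run before deploying:
// parse web.xml and fail the build unless the Facelets refresh period is -1.
// The web.xml location and the expected value are assumptions for illustration.
public class CheckRefreshPeriod {

    public static void main(String[] args) throws Exception {
        File webXml = new File(args.length > 0 ? args[0] : "src/main/webapp/WEB-INF/web.xml");
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(webXml);

        // Find the <param-value> belonging to the FACELETS_REFRESH_PERIOD context-param
        String value = (String) XPathFactory.newInstance().newXPath().evaluate(
                "//context-param[param-name='javax.faces.FACELETS_REFRESH_PERIOD']/param-value",
                doc, XPathConstants.STRING);

        if (!"-1".equals(value.trim())) {
            System.err.println("FACELETS_REFRESH_PERIOD is '" + value
                    + "' but production expects -1. Failing the build.");
            System.exit(1);   // non-zero exit fails the Jenkins step
        }
        System.out.println("FACELETS_REFRESH_PERIOD check passed.");
    }
}
```

Run as a build step (for example, a Jenkins shell or Maven exec step that executes this class), the non-zero exit code stops the promotion before the development setting can reach production.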

