Stress & Load Testing Web Apps (Even ADF & Apex) Using Apache JMeter

I can't recommend enough the importance of stress testing your web applications

A couple of years ago I presented "Take a load off! Load testing your Oracle Apex or JDeveloper web applications" at OOW and AUSOUG. I can't recommend enough the importance of stress testing your web applications; it's saved my bacon a number of times. Frequently as developers we work under a single-user (developer) model where concurrency issues are easily avoided. When our programs hit production with just one more user, suddenly they grind to a halt or fall over in bizarre places. The result: pie on developers' faces, users' faith in new technologies destroyed, and general gnashing of teeth all round. Some simple stress and load tests can head off problems well before they hit production.

(For the remainder of this post I'll treat "stress testing" and "load testing" as the same thing, though strictly speaking one tests whether your application falls over, and the other how fast it responds under load.)

So how to go about stress testing a web application?

There are numerous tools available to stress test web applications, both paid and free. This post will look at the setup and use of Apache JMeter, my tool of choice (mainly because it is free!), to undertake a very simple stress test. Apache JMeter is available from the Apache JMeter website, version 2.3.3 at the time of writing.

On starting JMeter (<jmeter-home>/bin/jmeter.bat on Windows) you'll see the following:

Creating a Thread Group
From here what we want to do is set up a Thread Group that simulates a number of users (concurrent sessions), done by right clicking the Test Plan node -> Add -> Thread Group. This results in:


As you can see the Thread Group allows us to set a number of threads to simulate concurrent users/sessions, loop through tests and more.
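If you've never thought about what a Thread Group is actually doing, the following hand-rolled Java sketch is roughly the idea: a fixed pool of threads, each looping over the same request and reporting the response code and elapsed time. The localhost URL and the thread/loop counts are placeholders only; JMeter gives you all of this, plus the reporting, without writing a line of code.

// A minimal sketch of what a JMeter Thread Group models: N concurrent
// "users", each looping through the same request a fixed number of times.
// The target URL below is a placeholder; substitute your own application's URL.
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class MiniThreadGroup {
    static final int THREADS = 5;   // "Number of Threads (users)"
    static final int LOOPS = 10;    // "Loop Count"

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(THREADS);
        for (int user = 0; user < THREADS; user++) {
            pool.submit(() -> {
                for (int i = 0; i < LOOPS; i++) {
                    try {
                        HttpURLConnection con = (HttpURLConnection)
                            new URL("http://localhost:8080/myapp/").openConnection();
                        long start = System.nanoTime();
                        int code = con.getResponseCode();   // fires the request
                        long ms = (System.nanoTime() - start) / 1_000_000;
                        System.out.println(Thread.currentThread().getName()
                            + " -> HTTP " + code + " in " + ms + "ms");
                        con.disconnect();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
    }
}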

Creating HTTP Requests
From here we can create a number of HTTP Request samplers (Test Plan node right click -> Add -> Sampler -> HTTP Request) to simulate each HTTP request operation (GET, POST etc), HTTP headers, payloads and more. However in a standard user session between server and browser there can be a huge array of these requests, and configuring each of them by hand within JMeter would be a major pain.
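For the curious, here's a rough Java sketch of the two operations an HTTP Request sampler typically models: a GET carrying a request header, and a POST carrying a form payload. The URLs and parameters are invented purely for illustration; your application's requests will obviously differ.

// A GET with a request header, then a POST with a URL-encoded form payload.
// Everything here (paths, parameters) is a made-up example.
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class SamplerSketch {
    public static void main(String[] args) throws Exception {
        // GET with a request header
        HttpURLConnection get = (HttpURLConnection)
            new URL("http://localhost:8080/myapp/home").openConnection();
        get.setRequestProperty("Accept", "text/html");
        System.out.println("GET  -> HTTP " + get.getResponseCode());
        get.disconnect();

        // POST with a form payload
        HttpURLConnection post = (HttpURLConnection)
            new URL("http://localhost:8080/myapp/login").openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        byte[] body = "username=demo&password=demo".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = post.getOutputStream()) {
            out.write(body);
        }
        System.out.println("POST -> HTTP " + post.getResponseCode());
        post.disconnect();
    }
}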

Configuring the HTTP Proxy Server

However there's an easier way. Apache JMeter can work as a proxy between your browser and server and record a user's HTTP session, namely the individual HTTP requests, which can then be replayed in a JMeter Thread Group later.

To set this up, instead right click the Workbench node -> Add -> Non-Test Elements -> HTTP Proxy Server:


To configure the HTTP Proxy Server do the following:

* Port – set to a number that won't clash with an existing HTTP server on your PC (say 8085)
* Target Controller – set to "Test Plan > Thread Group". When the proxy server records the HTTP session between your browser and server, this setting means the HTTP requests will be recorded against the Thread Group you created earlier, so we can reuse them later
* URL Patterns to include – a regular-expression-based string that tells the proxy server which URLs to record and which to ignore. To capture everything set it to .* (dot star); see the regex sketch after this list for a feel of how such patterns match. Be warned that during recording, if you use your browser for anything other than accessing the server you wish to stress test, JMeter will capture that traffic too. This includes periodic refreshes by web applications such as Gmail or Google Docs that you don't even initiate; I'm pretty sure when replaying your stress test, Google would prefer you not to stress test their infrastructure for them; stick to your own for now ;-)
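To give a feel for how include patterns behave, here's a small Java regex sketch. The patterns and URLs are made up for illustration, and JMeter's own matching offers more options than shown here, but the principle is the same: .* records everything, while a narrower pattern limits recording to your own application's traffic.

// Compare an "everything" pattern against a pattern scoped to one application.
import java.util.regex.Pattern;

public class UrlPatternSketch {
    public static void main(String[] args) {
        String[] urls = {
            "http://localhost:8080/myapp/faces/home.jspx",
            "http://localhost:8080/myapp/images/logo.gif",
            "http://mail.example.com/refresh"
        };
        Pattern everything = Pattern.compile(".*");
        Pattern myAppOnly  = Pattern.compile(".*localhost:8080/myapp/.*");

        for (String url : urls) {
            System.out.printf("%-50s  .*=%b  myapp=%b%n", url,
                everything.matcher(url).matches(),
                myAppOnly.matcher(url).matches());
        }
    }
}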

The final HTTP Proxy Server settings will look something like this:


You'll note the HTTP Proxy Server has a Start button. We can't use this just yet.

Configuring your Browser
In order for the JMeter HTTP Proxy Server to capture the traffic between your server and browser, you need to make some changes to your browser's configuration. I'm assuming you're using Firefox 3 in the following example, but approximately the same steps are needed for Internet Explorer.

Under Firefox open the Tools -> Options menu, then Advanced icon, Network tab, Settings button which will open the Connection Settings dialog.

In the Connection Settings dialog set the following:

* Select the Manual proxy configuration radio button
* HTTP Proxy – localhost
* Port – 8085 as per the JMeter HTTP Proxy Server option we set earlier
* No Proxy for – ensure that localhost and 127.0.0.1 aren't in the exclusion list


The above setup assumes that the server you want to access is reachable without a further external proxy.
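As an aside, if the traffic you want to record comes from a Java program rather than a browser, the standard JVM proxy system properties achieve the same result as the browser settings above. A minimal sketch, assuming the JMeter HTTP Proxy Server is listening on localhost:8085 and using a placeholder target URL:

// Route a Java HTTP client through the JMeter proxy so its requests are
// recorded against the Thread Group, mirroring the browser configuration.
import java.net.HttpURLConnection;
import java.net.URL;

public class RecordViaProxy {
    public static void main(String[] args) throws Exception {
        System.setProperty("http.proxyHost", "localhost");
        System.setProperty("http.proxyPort", "8085");
        // Mirror the browser's "No Proxy for" step: don't bypass the proxy
        // for localhost addresses.
        System.setProperty("http.nonProxyHosts", "");

        HttpURLConnection con = (HttpURLConnection)
            new URL("http://localhost:8080/myapp/").openConnection();
        System.out.println("HTTP " + con.getResponseCode());
        con.disconnect();
    }
}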


Recording your HTTP session
Once the browser's proxy is set up, do the following to record a session between the browser and server:

1) In Apache JMeter hit the Start button on the HTTP Proxy Server page
2) In your browser enter the URL of the first page in the application you want to stress test

Thereafter, as you navigate your web application, enter data and so on, JMeter will faithfully record each HTTP request between the browser and server against your Thread Group. This may not be immediately obvious, but expand the Thread Group and you'll see each HTTP request made from the browser to the server:


As can be seen, even visiting one web page can generate a huge amount of traffic. Be sure to stop recording the HTTP session by selecting the Stop button on the JMeter HTTP Proxy Server page.

Configuring the Thread Group for replay
Once you've recorded the session in the Thread Group there are a couple of extra things we need to do.

For web applications that use cookies and session IDs to track each unique user session (JDeveloper's ADF uses a JSESSIONID cookie, for example), we cannot simply replay the exact recorded HTTP request sequence against the server through JMeter, as the session ID is pegged to the recorded session, not the upcoming stress test sessions.
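Before looking at the JMeter fix, here's a small Java sketch of the idea behind a cookie manager: accept the session cookie (such as JSESSIONID) issued on the first response and send it back automatically on subsequent requests, rather than replaying the stale ID baked into the recording. The URL is a placeholder.

// One simulated user with its own in-memory cookie store: the session
// cookie from the first response is returned on the second request, so
// both requests belong to the same fresh session.
import java.net.CookieHandler;
import java.net.CookieManager;
import java.net.CookiePolicy;
import java.net.HttpURLConnection;
import java.net.URL;

public class CookieSketch {
    public static void main(String[] args) throws Exception {
        CookieManager cookies = new CookieManager(null, CookiePolicy.ACCEPT_ALL);
        CookieHandler.setDefault(cookies);   // picked up by HttpURLConnection

        URL app = new URL("http://localhost:8080/myapp/");

        // First request: the server issues a new session cookie.
        HttpURLConnection first = (HttpURLConnection) app.openConnection();
        first.getResponseCode();
        System.out.println("Cookies after first request: "
            + cookies.getCookieStore().getCookies());

        // Second request: the stored cookie is sent back automatically.
        HttpURLConnection second = (HttpURLConnection) app.openConnection();
        System.out.println("Second request -> HTTP " + second.getResponseCode());
    }
}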

To solve this, in JMeter right click the Thread Group -> Add -> Config Element -> HTTP Cookie Manager. This will be added as the last element of the Thread Group. I usually move it to the top of the tree:


Next we need to configure the Thread Group to show us the results of the stress test. There are a number of different ways to do this, from graphing the responses to showing the raw HTTP responses. In this post we'll take the latter option.

Right click the Thread Group -> Add -> Listener -> View Results in Tree, which will add a View Results in Tree node to the end of the Thread Group:


Finally save the Thread Group by selecting it in the node tree, then File -> Save.

Running the Thread Group
To commence your first stress test run, it's best to leave the number of spawned sessions at 1, just to see that the overall test works in its most basic form. The default Thread Group number of threads is 1, so no change is needed for this.

To run the test, simply select the Run menu -> Start. While the Thread Group is running, you'll see a little box at the top right of JMeter that tells you whether it's still running, and the number of tests to go versus the total number of tests:


Once the tests are complete, this indicator will grey out.

We can now visit the View Results Tree:


This shows the HTTP requests that were sent out; on selecting an individual request, you see the raw HTTP request and the actual response. You'll note the small green triangles showing a successful HTTP 200 result. If different HTTP errors occur, the triangles show different colours. Also remember that sometimes application errors don't percolate up to the HTTP layer in your web application, so you should check your application's logs too (in the case of a JEE application, this will be your container's internal logs).
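If you want to automate that sanity check outside of JMeter, a rough Java sketch follows: flag anything that isn't an HTTP 200, and treat a 200 page containing an error marker as a failure too. The URL and the error markers are assumptions; adjust them for your own application and logs.

// Check the response code and scan the body for application-level error text.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ResponseCheck {
    public static void main(String[] args) throws Exception {
        HttpURLConnection con = (HttpURLConnection)
            new URL("http://localhost:8080/myapp/").openConnection();
        int code = con.getResponseCode();
        if (code != 200) {
            System.err.println("FAIL: HTTP " + code);
            return;
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(con.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                // A crude application-level assertion: a 200 page that
                // contains an error marker still counts as a failure.
                if (line.contains("ORA-") || line.contains("Exception")) {
                    System.err.println("FAIL: error text in 200 response: " + line);
                    return;
                }
            }
        }
        System.out.println("PASS: HTTP 200, no error markers found");
    }
}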

Running a Stress Test
The obvious step from here is to change the Thread Group number of threads to a higher number.

From here take time out to explore the other features in JMeter. It includes a wide range of features that in particular make it useful for regression testing.

Caveats
Firstly, remember that when doing this you're not only stress testing your application; you're stress testing a server, potentially stress testing databases, stress testing your networks and so on. Therefore you can have an effect on anybody sharing those resources. "Hard core" stress tests should be run on separate infrastructure, after hours, aiming for as little impact on those around you as possible!

Also keep in mind that besides seeing whether your application falls over at 2 users, 10 users, 100 users, which is an important test, you should try to be realistic about your stress tests. Stress testing your brand-new application with 1 million concurrent users is probably not realistic. How many concurrent user requests do you really expect, and what response times do you need? Normally when I ask managers this question they'll answer with, "oh we have 1000 concurrent users, the application must support that many at any one time". However what they really mean is the application has 1000 users, potentially all logged into the application (i.e. sessions) at the same time, but not necessarily all hitting the server with HTTP requests at any one time.
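A quick back-of-the-envelope calculation helps turn "1000 logged-in users" into a realistic request rate. The think time and service time below are assumptions for illustration; measure your own application's figures and plug them in.

// Estimate the request arrival rate and the number of requests actually
// in flight at once (Little's law) from active sessions and think time.
public class LoadEstimate {
    public static void main(String[] args) {
        int activeSessions = 1000;        // users logged in at once
        double thinkTimeSec = 30.0;       // average pause between page requests
        double serviceTimeSec = 0.5;      // average time to serve one request

        double requestsPerSec = activeSessions / thinkTimeSec;
        double inFlight = requestsPerSec * serviceTimeSec;

        System.out.printf("Arrival rate: %.1f requests/sec%n", requestsPerSec);
        System.out.printf("Requests in flight at any instant: %.1f%n", inFlight);
    }
}

With these figures, 1000 active sessions with a 30 second think time works out to roughly 33 requests per second, or around 17 requests actually in flight at any instant; a far cry from 1000 simultaneous hits.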


More Stories By Chris Muir

Chris Muir, an Oracle ACE Director, senior developer and trainer, and frequent blogger at http://one-size-doesnt-fit-all.blogspot.com, has been hacking away as an Oracle consultant with Australia's SAGE Computing Services for too many years. Taking a pragmatic approach to all things Oracle, Chris has more recently earned battle scars with JDeveloper, Apex, OID and web services, and has some very old war-wounds from a dark and dim past with Forms, Reports and even Designer 100% generation. He is a frequent presenter and contributor to the local Australian Oracle User Group scene, as well as a contributor to international user group magazines such as the IOUG and UKOUG.
