Four Ways to Load Test Your API | @DevOpsSummit #API #DevOps #Microservices

How do you make sure your API can handle even the heaviest load?

By Les Worley

You know the feeling. You're surfing along and then BAM! you get the dreaded

503 Service Unavailable

The server is currently unavailable (because it is overloaded or down for maintenance).

Do you remember what you were searching for the last time that happened? Or what company you would have bought from? Probably not.

Worse, if you had just clicked "Submit Order" and waited and waited... and then got such an error... did you have any idea whether the order went through, or if your credit card was charged?

Chances are that merchant lost you as a customer forever. You Googled for someone else and never looked back.

Of course, as a developer you want your service to be heavily used. Otherwise, what's the point? But loads depend on many things: time of day, end of month processing, overnight batch runs, press releases, product launches - you name it.

So how do you make sure your API can handle even the heaviest load? How do you keep your customers from walking away?

Don't Lose Authority from an Unreliable API
That's one example - from a consumer point of view - of an API service that couldn't handle the load. It resulted in a lost customer.

When you publish your web service, you want to be the one that everyone looks to as the authority - especially if they pay to use the API.

Perhaps your API is just for "internal use." But there's always someone that relies on the results. Maybe the web team uses your API to search your company's product catalog. They then display the results to your customers. Busy server, no results - no sale. Maybe your API is not so internal after all.

Either way, the last thing you want is for your users to be screaming about an error like

  • "Service unavailable"
  • "Connection rejected"
  • "Server timed out"
  • "Unknown error occurred"

If you're the consumer of a service, like a financial transaction API, you want your app to handle any exception that API throws. But as the API publisher, it's up to you to make sure your API can handle heavy loads so it doesn't throw those exceptions. Or if it does, it does so gracefully and in a well-defined and documented manner.
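On the consumer side, that defensive handling often takes the form of a retry with backoff instead of surfacing a raw 503 to the user. A minimal sketch (the `fetch` callable and the retry parameters are illustrative, not from any particular library):

```python
import time

def call_with_retry(fetch, max_retries=3, backoff=0.1):
    """Call `fetch` (a function returning (status, body)); retry on 503
    with exponential backoff instead of surfacing a raw error."""
    for attempt in range(max_retries):
        status, body = fetch()
        if status != 503:
            return status, body
        time.sleep(backoff * (2 ** attempt))  # wait longer each retry
    return 503, "Service unavailable - please try again later"

# Simulate an API that fails twice under load, then recovers.
responses = iter([(503, ""), (503, ""), (200, "order confirmed")])
status, body = call_with_retry(lambda: next(responses))
```

This is exactly the kind of behavior a load test should provoke on purpose: you want to see your consumers recover, not hang.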

If you can't guarantee your users predictability under heavy load, you'll lose that authority. They'll pay someone else for a more reliable API.

In other words, you need to test the heck out of it.

There are three typical approaches to load testing an API. Each of these, as well as their various hybrids, has drawbacks.

#1 Faking it - API Mocking
The least useful for load testing are API mocks. These are basically temporary placeholders used during development of an API. Developers use these for early unit testing. But they also provide these "dummy" versions to other teams that need to call the unfinished API from within their own code.

Mocks return hard-coded responses, the formats of which often change during development. They have little "meat" behind them, only canned responses that can be served up immediately. And they are disposable - once the actual API is completed, the code is thrown away. In other words, the effort to create the mocks is wasted after their initial use.

And because mocks aren't yet hooked up to actual data sources, their performance isn't at all representative of the real world.

In summary, mocks may be useful for unit testing, but they aren't representative of the real world.
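To see why, consider what a mock actually is. A sketch of one, stripped to its essence (the routes and payloads are made up for illustration):

```python
# Hard-coded request -> response table; no database, no business logic.
CANNED = {
    ("GET", "/products/42"): (200, {"id": 42, "name": "widget", "price": 9.99}),
    ("GET", "/products/99"): (404, {"error": "not found"}),
}

def mock_api(method, path):
    """Return a canned response instantly. Because nothing real happens
    behind it, response timings say nothing about production performance."""
    return CANNED.get((method, path), (501, {"error": "not implemented"}))

status, body = mock_api("GET", "/products/42")
```

A dictionary lookup will happily serve tens of thousands of "requests" per second; that number tells you nothing about how the real API, with its queries and downstream calls, will hold up.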

#2 Cloning It - A Full Test Environment
The mock API approach assumes the API isn't actually complete, or that there's no live data source available. If the API is complete and ready for testing, we can use the clone approach. This approach stands up a full-blown test environment using a snapshot of production data.

[Image: Disadvantages of cloned environments]

Often you'll have a copy of all the applications running, too, because your code relies on other APIs, not just database queries.

The good in this approach is that it's very representative of the live system's performance. If your load test brings the system to its knees, no problem! It hurts no one.

The bad parts outweigh the good though. For one thing, production data often contains sensitive customer information, so data privacy regulations can come into play. Also, if your own API makes use of pay-per-use services, this testing can get very expensive. Finally, production data is stateful, even in a cloned environment. If you need to re-run the tests, you'll have to reload the test bed each time.

In summary, cloned environments give you near-production-quality load test results, without harming production if your test causes it to crater. However, they raise privacy concerns, require constant data reloading, and can be very expensive if calling third party APIs.

#3 Hitting production - Load Testing Live APIs
Sadly, the method so many companies use is to test on production itself. Why? Well, it's live, so it's the most representative of all methods. It's also "easy" - no separate test environment to stand up and maintain. But that's where the usefulness ends.

[Image: Hitting production is easiest, but...]

In production, you can't use the "real" data. Instead you have to maintain separate "test accounts." Using test accounts in a live environment reduces privacy concerns to a degree. These data are stateful, of course, so you have to reset them before each test run. And you still have to worry about the cost of pay-per-use APIs.

It's important to schedule your load test "off hours." This can be tricky, since the Internet allows your customers access 24/7. Even then, if your load test brings the server down, or corrupts production data, you're fired (or you'll soon wish you were).

Remember, "Service unavailable" signals to your customers - and their customers - that your site, or your API service, is unreliable.

#4 API Virtualization - The Risk-free Approach
You want to perform an exhaustive load test of your API - and keep your existing customers and your job. That rules out hitting the production system.

You also need more than a test version of your API in a cloned environment. You need to test the real API in an environment where you can control the load conditions for every aspect of the test.

That's where API virtualization comes in. A virtual API provides a sandbox environment where you can simulate environmental loads on network bandwidth, server and database connections, simultaneous users and more.

With API virtualization, minimal resources are required for standing up and testing the API. You test the API itself, not the end-to-end application with all its required backend systems.  This eliminates the need for a clone of the production environment. That means you don't have to reset and reload downstream data anymore.

[Image: Load testing with API virtualization]

The data-in and data-out can be as real as you wish. You can create your own requests and responses to test against, or capture and store actual requests and responses from a known source. The data can be played back during the load test, firing the requests as fast or as slow as you wish.
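Capture-and-playback at a controlled rate can be sketched in a few lines. Here the captured traffic, the `send` callable, and the rate parameter are all hypothetical stand-ins for whatever your virtualization tool records:

```python
import time

# Hypothetical captured traffic: (request path, recorded status) pairs.
captured = [("/search?q=widgets", 200), ("/cart/add", 200), ("/checkout", 503)]

def replay(traffic, send, requests_per_second=50):
    """Fire recorded requests at a fixed rate and tally response statuses."""
    interval = 1.0 / requests_per_second
    results = {}
    for path, _recorded in traffic:
        status = send(path)
        results[status] = results.get(status, 0) + 1
        time.sleep(interval)  # throttle to the chosen request rate
    return results

# Stand-in for the virtual API under test: echoes the recorded status.
lookup = dict(captured)
counts = replay(captured, lambda p: lookup[p], requests_per_second=100)
```

Dial `requests_per_second` up until something breaks, and you've found your API's ceiling without touching production.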

And those pay-per-use APIs? You can virtualize those, too, simulating a variety of responses and latency that the actual API could introduce.
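A virtualized pay-per-use dependency is little more than a stub with configurable latency and failure injection. A sketch, with every parameter invented for illustration (no real payment service works this way):

```python
import random
import time

def virtual_payment_api(request, min_latency=0.01, max_latency=0.05,
                        error_rate=0.1):
    """Stand-in for a pay-per-use payment API: injects configurable latency
    and occasional 503s so callers can be load tested without incurring
    real per-call charges."""
    time.sleep(random.uniform(min_latency, max_latency))  # simulated latency
    if random.random() < error_rate:
        return 503, {"error": "service unavailable"}
    return 200, {"authorized": True, "amount": request["amount"]}

status, body = virtual_payment_api({"amount": 19.99}, error_rate=0.0)
```

Crank `error_rate` and `max_latency` up and you can watch how your API behaves when its dependency degrades - for free.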

The result? You can test your API in isolation, gauging its reaction against real data, under a variety of load conditions. And you can fine-tune it until it passes with flying colors. No production system required - or harmed.

Don't give in to the strain
Hammering an API in production to simulate heavy request loads isn't wise. You can end up angering - and losing - real customers, not to mention your job. And countless downstream systems can be affected along the way.

API Virtualization allows you to load test your API in isolation from the rest of the system. By configuring requests and responses - or capturing real ones - your API can respond to a variety of requests with any number of responses - good and bad. With complete control of the test conditions, you can simulate network load, maximum connection limits, latencies and other conditions that happen in the real world.
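One of those conditions, a server's maximum-connection cap, is straightforward to simulate. A sketch using a non-blocking semaphore (the class and limit are illustrative):

```python
import threading

class ConnectionLimiter:
    """Simulate a server's maximum-connection cap: requests beyond the
    limit are rejected immediately, mimicking a 'connection rejected'
    error under load."""
    def __init__(self, max_connections):
        self.sem = threading.Semaphore(max_connections)

    def try_acquire(self):
        # Non-blocking: returns False instead of queueing the caller.
        return self.sem.acquire(blocking=False)

    def release(self):
        self.sem.release()

limiter = ConnectionLimiter(max_connections=2)
accepted = [limiter.try_acquire() for _ in range(3)]  # third exceeds the cap
```

That third rejected "connection" is the event you want your load test to generate deliberately, so you can see how clients and callers cope.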

Knowing how your API will respond under stress allows you to fix it, tune it, optimize it - before you unleash it for your customers to use.


More Stories By SmartBear Blog

As the leader in software quality tools for the connected world, SmartBear supports more than two million software professionals and over 25,000 organizations in 90 countries that use its products to build and deliver the world’s greatest applications. With today’s applications deploying on mobile, Web, desktop, Internet of Things (IoT) or even embedded computing platforms, the connected nature of these applications through public and private APIs presents a unique set of challenges for developers, testers and operations teams. SmartBear's software quality tools assist with code review, functional and load testing, API readiness as well as performance monitoring of these modern applications.
