Four Ways to Load Test Your API | @DevOpsSummit #API #DevOps #Microservices

How do you make sure your API can handle even the heaviest load?

By Les Worley

You know the feeling. You're surfing along and then BAM! you get the dreaded

503 Service Unavailable

The server is currently unavailable (because it is overloaded or down for maintenance).

Do you remember what you were searching for the last time that happened? Or what company you would have bought from? Probably not.

Worse, if you had just clicked "Submit Order" and waited and waited... and then got such an error... Did you have any idea whether the order went through, or whether your credit card was charged?

Chances are that merchant lost you as a customer forever. You googled for someone else and never looked back.

Of course, as a developer you want your service to be heavily used. Otherwise, what's the point? But loads depend on many things: time of day, end of month processing, overnight batch runs, press releases, product launches - you name it.

So how do you make sure your API can handle even the heaviest load? How do you keep your customers from walking away?

Don't Lose Authority from an Unreliable API
That's one example - from a consumer's point of view - of an API service that couldn't handle the load. It resulted in a lost customer.

When you publish your web service, you want to be the one that everyone looks to as the authority - especially if they pay to use the API.

Perhaps your API is just for "internal use." But there's always someone that relies on the results. Maybe the web team uses your API to search your company's product catalog. They then display the results to your customers. Busy server, no results - no sale. Maybe your API is not so internal after all.

Either way, the last thing you want is for your users to be screaming about errors like

  • Service unavailable
  • Connection rejected
  • Server timed out
  • "Unknown error occurred"

If you're the consumer of a service, like a financial transaction API, you want your app to handle any exception that API throws. But as the API publisher, it's up to you to make sure your API can handle heavy loads so it doesn't throw those exceptions. Or if it does, it does so gracefully and in a well-defined and documented manner.
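On the consumer side, that defensive handling often means treating a 503 as transient: retry with backoff a few times before giving up. Here is a minimal sketch in Python using the requests library; the endpoint URL, retry counts, and delays are illustrative assumptions, not any particular API's documented behavior:

```python
import time

import requests

RETRYABLE = {503, 429}  # transient statuses worth retrying

def call_with_backoff(url, attempts=4, base_delay=0.5):
    """Call an API, retrying transient failures with exponential backoff."""
    for attempt in range(attempts):
        try:
            resp = requests.get(url, timeout=5)
        except requests.RequestException:
            resp = None  # connection refused, timed out, etc.
        if resp is not None and resp.status_code not in RETRYABLE:
            resp.raise_for_status()  # surface errors we won't retry
            return resp
        time.sleep(base_delay * (2 ** attempt))  # back off before retrying
    raise RuntimeError(f"{url} still unavailable after {attempts} attempts")

# Hypothetical endpoint -- substitute the real API you depend on:
# order = call_with_backoff("https://api.example.com/orders/1234").json()
```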

If you can't guarantee your users predictability under heavy load, you'll lose that authority. They'll pay someone else for a more reliable API.

In other words, you need to test the heck out of it.

There are three typical approaches to load testing an API. Each of these, as well as their various hybrids, has drawbacks. A fourth approach avoids most of them.

#1 Faking it - API Mocking
The least useful for load testing are API mocks. These are basically temporary placeholders used during development of an API. Developers use these for early unit testing. But they also provide these "dummy" versions to other teams that need to call the unfinished API from within their own code.

Mocks return hard-coded responses, the formats of which often change during development. They have little "meat" behind them, only canned responses that can be served up immediately. And they are disposable - once the actual API is completed, the code is thrown away. In other words, the effort to create the mocks is wasted after their initial use.

And because mocks aren't yet hooked up to actual data sources, their performance isn't at all representative of the real world.

In summary, mocks may be useful for unit testing, but they aren't representative of the real world.
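To make that concrete, here's roughly what a mock amounts to: a hedged sketch using only Python's standard library, serving a single hard-coded response with nothing behind it. The catalog payload and port are invented for illustration:

```python
# A throwaway mock: one canned response, served immediately, with no real
# data source behind it -- useful for unit tests, useless for load tests.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED = {"products": [{"id": 1, "name": "widget", "in_stock": True}]}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps(CANNED).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), MockHandler).serve_forever()
```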

#2 Cloning It - A Full Test Environment
The mock API approach assumes the API isn't actually complete, or that there's no live data source available. If the API is complete and ready for testing, we can use the clone approach. This approach stands up a full-blown test environment using a snapshot of production data.

[Figure: disadvantages of cloned environments]

Often you'll have a copy of all the applications running, too, because your code relies on other APIs, not just database queries.

The good in this approach is that it's very representative of the live system's performance. If your load test brings the system to its knees, no problem! It hurts no one.

The bad parts outweigh the good though. For one thing, production data often contains sensitive customer information, so data privacy regulations can come into play. Also, if your own API makes use of pay-per-use services, this testing can get very expensive. Finally, production data is stateful, even in a cloned environment. If you need to re-run the tests, you'll have to reload the test bed each time.

In summary, cloned environments give you near-production-quality load test results, without harming production if your test causes it to crater. However, they raise privacy concerns, require constant data reloading, and can be very expensive if calling third party APIs.
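That reload chore is worth seeing in miniature. A sketch of what "reload the test bed each time" amounts to, assuming a PostgreSQL clone restored from a dump file; the database and file names here are hypothetical:

```python
# Reset the cloned database so every run starts from the same state.
# Assumes PostgreSQL client tools on PATH; "testdb" and "snapshot.dump"
# are placeholder names.
import subprocess

def reload_test_bed(db="testdb", snapshot="snapshot.dump"):
    """Drop and restore the cloned database from a known snapshot."""
    subprocess.run(["dropdb", "--if-exists", db], check=True)
    subprocess.run(["createdb", db], check=True)
    subprocess.run(["pg_restore", "--dbname", db, snapshot], check=True)

reload_test_bed()  # must run before *every* load-test pass
```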

#3 Hitting production - Load Testing Live APIs
Sadly, the method so many companies use is to test on production itself. Why? Well, it's live, so it's the most representative of all methods. It's also "easy" - no separate test environment to stand up and maintain. But that's where the usefulness ends.

[Figure: hitting production is easiest, but...]

In production, you can't use the "real" data. Instead you have to maintain separate "test accounts." Using test accounts in a live environment reduces privacy concerns to a degree. These data are stateful, of course, so you have to reset them before each test run. And you still have to worry about the cost of pay-per-use APIs.

It's important to schedule your load test "off hours." This can be tricky, since the Internet allows your customers access 24/7. Even then, if your load test brings the server down, or corrupts production data, you're fired (or you'll soon wish you were).
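In practice that discipline tends to get encoded as guardrails in the harness itself. A hedged sketch of two such checks - an off-hours window and a test-account allowlist; the window, header name, and account IDs are all invented for illustration:

```python
# Guardrails for load testing against production: only fire during an
# agreed off-hours window, and only against designated test accounts.
import datetime

OFF_HOURS = range(2, 5)  # 02:00-04:59 local time (hypothetical window)
TEST_ACCOUNTS = ["loadtest-01", "loadtest-02"]

def safe_to_fire(now=None):
    now = now or datetime.datetime.now()
    return now.hour in OFF_HOURS

def build_request(account):
    assert account in TEST_ACCOUNTS, "never touch real customer data"
    return {"url": "https://api.example.com/orders",      # hypothetical
            "headers": {"X-Test-Account": account}}        # hypothetical

if not safe_to_fire():
    raise SystemExit("outside the agreed load-test window; aborting")
```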

Remember, "Service unavailable" signals your customers and their customers that your site - or your API service - is unreliable.

#4 API Virtualization - The Risk-free Approach
You want to perform an exhaustive load test of your API - and keep your existing customers and your job. That rules out hitting the production system.

You also need more than a test version of your API in a cloned environment. You need to test the real API in an environment where you can control the load conditions for every aspect of the test.

That's where API virtualization comes in. A virtual API provides a sandbox environment where you can simulate environmental loads on network bandwidth, server and database connections, simultaneous users and more.

With API virtualization, minimal resources are required for standing up and testing the API. You test the API itself, not the end-to-end application with all its required backend systems. This eliminates the need for a clone of the production environment. That means you don't have to reset and reload downstream data anymore.

[Figure: load testing with API virtualization]

The data-in and data-out can be as real as you wish. You can create your own requests and responses to test against, or capture and store actual requests and responses from a known source. The data can be played back during the load test, firing the requests as fast or as slow as you wish.
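A sketch of that playback idea, assuming captured traffic stored in a JSON file of method/path/body records and a virtual API listening locally; the file format, URL, and rate are illustrative assumptions:

```python
# Replay previously captured requests against the virtual API at a
# configurable rate. "traffic.json" and the sandbox URL are hypothetical.
import json
import time

import requests

VIRTUAL_API = "http://localhost:9090"  # assumed virtualization sandbox

def replay(capture_file="traffic.json", requests_per_second=50):
    with open(capture_file) as f:
        records = json.load(f)
    interval = 1.0 / requests_per_second
    for rec in records:
        requests.request(rec["method"], VIRTUAL_API + rec["path"],
                         json=rec.get("body"), timeout=5)
        time.sleep(interval)  # crude pacing; raise the rate to raise the load

# replay("traffic.json", requests_per_second=100)
```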

And those pay-per-use APIs? You can virtualize those, too, simulating a variety of responses and latency that the actual API could introduce.
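A virtualized pay-per-use dependency can be as simple as a local stub that returns canned payloads while injecting latency and the occasional 503, so the real (billable) service is never called. A sketch using Python's standard library; the latency distribution and failure rate are invented, not measured from any real service:

```python
# A virtual stand-in for a paid third-party API: canned responses plus
# simulated latency and intermittent failures.
import json
import random
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualPaidAPI(BaseHTTPRequestHandler):
    def do_GET(self):
        time.sleep(random.uniform(0.05, 0.8))  # simulated upstream latency
        if random.random() < 0.05:             # 5% injected failures
            self.send_response(503)
            self.end_headers()
            return
        body = json.dumps({"quote": 42.0}).encode()  # canned payload
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("localhost", 9090), VirtualPaidAPI).serve_forever()
```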

The result? You can test your API in isolation, gauging its behavior against real data under a variety of load conditions. And you can fine-tune it until it passes with flying colors. No production system required - or harmed.

Don't give in to the strain
Hammering an API in production to simulate heavy request loads isn't wise. You can end up angering - and losing - real customers, affecting countless downstream systems, and putting your job at risk.

API Virtualization allows you to load test your API in isolation from the rest of the system. By configuring requests and responses - or capturing real ones - your API can respond to a variety of requests with any number of responses - good and bad. With complete control of the test conditions, you can simulate network load, maximum connection limits, latencies and other conditions that happen in the real world.
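Tying it together, the load driver itself can stay small. A sketch that fires concurrent requests at a virtual API (assumed, as above, to be listening on localhost:9090) and summarizes the latency distribution; the worker count and request total are arbitrary:

```python
# Drive concurrent load against the virtual API and report latency.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

def one_call():
    start = time.perf_counter()
    try:
        requests.get("http://localhost:9090/quote", timeout=10)
    except requests.RequestException:
        pass  # a real harness would count errors separately
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=100) as pool:  # 100 simulated users
    latencies = sorted(pool.map(lambda _: one_call(), range(2000)))

print(f"median {statistics.median(latencies):.3f}s, "
      f"p99 {latencies[int(len(latencies) * 0.99)]:.3f}s")
```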

Knowing how your API will respond under stress allows you to fix it, tune it, optimize it - before you unleash it for your customers to use.

