Monitoring Your APIs | @DevOpsSummit #API #DevOps #Microservices

Why You Should Be Monitoring Your APIs
by Priyanka Tiwari

Gone are the days when SOAP, REST and microservices were buzzwords that did not apply to you.

Gone are also the days when monolithic applications were built by organizations end-to-end.

The availability of specialized components and services from different organizations, and access to those components through RESTful services, has created a major inflection point: APIs have become the mechanism by which systems interact and exchange data.

Most organizations are familiar with the benefits of developing APIs that can be shared publicly and used by other third-party applications. These benefits include:

  • New acquisition channel: an API-driven channel can deliver 26% more customer lifetime value (CLTV) than other channels.
  • Upselling: Create demand for high-priced features by making them available through an API.
  • Affiliate marketing: Turn 3rd party developers into affiliates of your app or become an affiliate yourself.
  • Distribution: Enable 3rd party apps to get your content or products, grow with them.

But public or external APIs are just one part of the API ecosystem. In fact, when SmartBear Software surveyed more than 2,300 software professionals about how their organizations use APIs, 73% said they developed both internal and external APIs.

[Image: API Audiences]

Organizations across a wide variety of industries are currently developing APIs including: financial (70%), government (59%), transportation (54%), and healthcare (53%).

Even if you don't develop APIs, there's a good chance you rely on them to power your applications or accelerate internal projects.

When we asked why organizations are using/consuming APIs, we found that improving functionality, productivity, and efficiency were all top concerns:

  • 50% use APIs for interoperation between internal systems, tools, and teams
  • 49% use APIs to extend functionality of a product or service
  • 42% use APIs to reduce development time
  • 38% use APIs to reduce development cost

[Image: Why Consume APIs]

As you strive to deliver world class websites, web applications, mobile and SaaS applications, it's critical to make sure the APIs that empower them are running smoothly.

Planning meetings commonly turn to which APIs we need to expose to the outside world in order to drive business and, in some cases, to building APIs into the product roadmap. With so much riding on these small interfaces, it's important to recognize their inherent power and to build the appropriate safeguards to protect them.

If you are charged with building and supporting APIs in your group, or you are embracing a service-oriented architecture for the design and delivery of an entire app or set of apps, you will need to monitor these APIs at all times to ensure that consumers of the API and its services always have access.

It is also likely that you are consuming services provided by other groups or external partners to provide the capabilities in your app or apps. These need to be monitored as well.

API failures are often the most critical failures

In our newest infographic, How APIs Make or Break Success in the Digital World, we look at three high-profile examples of API failures:

  • January 2015: Facebook and Instagram servers went down for about an hour, taking both services offline and impacting a number of well-known websites including Tinder and HipChat.
  • September 2015: Amazon Web Services experienced a brief disruption that caused an increase in faults for the EC2 Auto Scaling APIs.
  • January 2016: The Twitter API experienced a worldwide outage that lasted more than an hour, impacting thousands of websites and apps.

API issues are not unique to big-name companies like Facebook and Amazon. Your application may run on an API from a smaller organization - do you really know how much work has been done to test the capacity of that API? Whether you're integrating with a third-party API or developing APIs of your own, any disruption could reduce your productivity and negatively impact your end users.

Or, as Arnie Leap, CIO, 1-800-FLOWERS explains:

"The performance of our tech stack is technology job number one for us at 1800flowers.com, Inc. Our customers trust that we will deliver smiles on time with the highest quality and integrity. Our enterprise API and services layers are held to extremely high levels of performance to help us achieve our customer experience goals. They are in the "middle" of everything we do!"

If you've made your API available to other developers, either in a controlled fashion to trusted partners or in a public way to anyone with a developer/production key, you take on a responsibility to ensure that nothing affects the API's performance.

You will have to worry about three areas when it comes to monitoring your APIs:

  • Availability: Is the service accessible at all times?
  • Functional Correctness: Is it responding with the correct payload, made up of the correct constituent elements?
  • Performance: Is it responsive, returning the correct response within an acceptable time frame?
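The three checks above can be sketched in a few lines of Python. This is an illustrative probe, not any particular monitoring product's API; the URL and expected payload fields are hypothetical placeholders you would replace with your own.

```python
import json
import time
import urllib.request

def check_api(url, expected_fields, max_latency_s=1.0):
    """Probe one endpoint for availability, functional correctness, and performance."""
    result = {"available": False, "correct": False, "fast_enough": False}
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read()
            # Availability: the endpoint answered with a successful status.
            result["available"] = resp.status == 200
    except OSError:
        # Connection refused, timeout, or HTTP error: availability check failed.
        return result
    # Performance: total round-trip time stayed within budget.
    result["fast_enough"] = (time.monotonic() - start) <= max_latency_s
    # Functional correctness: the payload parses and contains the expected fields.
    try:
        payload = json.loads(body)
        result["correct"] = all(f in payload for f in expected_fields)
    except ValueError:
        pass
    return result
```

A real monitor would run such a probe on a schedule and record the results; this sketch only captures the three dimensions of a single check.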

When setting up monitors for your APIs, it's important to consider that performance may differ if your APIs interact with multiple audiences - whether it's an internal, external, or public API. For example, internal APIs will typically perform faster than a public solution, where there are a lot of users coming to consume that data. So, you'll want to establish different standards and SLAs for each of those groups.
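One way to keep those per-audience standards straight is a simple threshold table. The structure below is a sketch, and the numbers are hypothetical illustrations (not recommendations); the point is only that internal, partner, and public audiences get different budgets.

```python
# Hypothetical per-audience SLA thresholds: internal APIs typically respond
# faster than partner or public ones, so each audience gets its own budget.
SLA_THRESHOLDS = {
    "internal": {"max_latency_s": 0.3, "min_uptime_pct": 99.9},
    "partner":  {"max_latency_s": 0.8, "min_uptime_pct": 99.5},
    "public":   {"max_latency_s": 1.5, "min_uptime_pct": 99.0},
}

def within_sla(audience, observed_latency_s):
    """Check an observed response time against the audience's latency budget."""
    return observed_latency_s <= SLA_THRESHOLDS[audience]["max_latency_s"]
```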

Once you've created your monitors and established your acceptable thresholds, you can set up alerts to be notified if performance degrades or the API goes offline. Choose the interval at which you'd like to test the API's performance in production. This allows you to find problems and fix them before they impact your customers. You can also choose a variety of locations to test from, both inside and outside of your network, so that you can test your APIs live, in production, from the geographies where your customers are.
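That alerting loop can be sketched as follows. `probe` and `alert` are placeholder callbacks (a real monitor would run probes from several geographic locations and notify via email, Slack, or a pager); the loop simply fires an alert when a check fails or latency crosses the configured threshold.

```python
import time

def monitor(probe, alert, max_latency_s, interval_s, iterations):
    """Run `probe` every `interval_s` seconds; call `alert` on failure or degradation."""
    for _ in range(iterations):
        start = time.monotonic()
        try:
            ok = probe()
        except Exception as exc:
            # The probe itself crashed: treat it as an outage and move on.
            alert(f"probe raised: {exc}")
            time.sleep(interval_s)
            continue
        elapsed = time.monotonic() - start
        if not ok:
            alert("check failed or endpoint offline")
        elif elapsed > max_latency_s:
            # Reachable but slow: performance degraded past the budget.
            alert(f"degraded: {elapsed:.2f}s exceeds {max_latency_s:.2f}s budget")
        time.sleep(interval_s)
```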

Getting started with API monitoring
When we asked software professionals about the tools they were using to deliver high quality APIs, we found that while a majority of teams are doing functional testing (71%), less than half are monitoring APIs while in production (48%).

Luckily, with an API monitoring tool like AlertSite, you can re-use the functional tests you have set up in a tool like SoapUI to create monitors for your APIs.

