Monitoring Your APIs | @DevOpsSummit #API #DevOps #Microservices

Why You Should Be Monitoring Your APIs
by Priyanka Tiwari

Gone are the days when SOAP, REST and microservices were buzzwords that did not apply to you.

Gone are also the days when monolithic applications were built by organizations end-to-end.

The availability of specialized components and services from different organizations, and the ability to reach them through RESTful interfaces, has created a major inflection point: APIs have become the mechanism by which systems interact and exchange data.

Most organizations are familiar with the benefits of developing APIs that can be shared publicly and used by other third-party applications. These benefits include:

  • New acquisition channel: An API-driven channel can deliver 26% more CLTV (customer lifetime value) than other channels.
  • Upselling: Create demand for high-priced features by making them available through an API.
  • Affiliate marketing: Turn third-party developers into affiliates of your app, or become an affiliate yourself.
  • Distribution: Enable third-party apps to carry your content or products, and grow with them.

But public or external APIs are just one part of the API ecosystem. In fact, when SmartBear software surveyed more than 2,300 software professionals about how their organizations use APIs, 73% said they developed both internal and external APIs.

[Figure: API audiences]

Organizations across a wide variety of industries are currently developing APIs, including financial (70%), government (59%), transportation (54%), and healthcare (53%).

Even if you don't develop APIs, there's a good chance you rely on them to power your applications or accelerate internal projects.

When we asked why organizations are using/consuming APIs, we found that improving functionality, productivity, and efficiency were all top concerns:

  • 50% use APIs for interoperation between internal systems, tools, and teams
  • 49% use APIs to extend functionality of a product or service
  • 42% use APIs to reduce development time
  • 38% use APIs to reduce development cost

[Figure: Why organizations consume APIs]

As you strive to deliver world-class websites, web applications, and mobile and SaaS applications, it's critical to make sure the APIs that power them are running smoothly.

Planning meetings routinely turn to which APIs need to be exposed to the outside world to drive business and, in some cases, to whether APIs belong on the product roadmap. With so much riding on the humble API, it's important to recognize its inherent power and make sure you build the appropriate safeguards to protect it.

If you are charged with building and supporting APIs in your group, or you are embracing a service-oriented architecture for the design and delivery of an entire app or set of apps, you will need to monitor those APIs continuously so that consumers of the API and its services always have access to them.

It is also likely that you are consuming services from other groups or external partners to power the capabilities in your app or apps. These need to be monitored as well.

API failures are often the most critical failures

In our newest infographic, How APIs Make or Break Success in the Digital World, we look at three high-profile examples of API failures:

  • January 2015: Facebook and Instagram servers went down for about an hour, taking both services offline and impacting a number of well-known websites including Tinder and HipChat.
  • September 2015: Amazon Web Services experienced a brief disruption that caused an increase in faults for the EC2 Auto Scaling APIs.
  • January 2016: The Twitter API experienced a worldwide outage that lasted more than an hour, impacting thousands of websites and apps.

API issues are not unique to big-name companies like Facebook and Amazon. Your application may depend on an API from a smaller organization - do you really know how much work has been done to test the capacity of that API? Whether you're integrating with a third-party API or developing APIs of your own, any disruption could hurt your productivity and degrade the experience of your end users.

Or, as Arnie Leap, CIO of 1-800-FLOWERS.COM, explains:

"The performance of our tech stack is technology job number one for us at 1800flowers.com, Inc. Our customers trust that we will deliver smiles on time with the highest quality and integrity. Our enterprise API and services layers are held to extremely high levels of performance to help us achieve our customer experience goals. They are in the "middle" of everything we do!"

If you've made your API available to other developers, either in a controlled fashion to trusted partners or in a public way to anyone with a developer/production key, you take on a responsibility to ensure that nothing affects the API's performance.

You will have to worry about three areas when it comes to monitoring your APIs (a minimal check covering all three is sketched after this list):

  • Availability: Is the service accessible at all times?
  • Functional Correctness: Is it responding with the correct payload made up of the correct constituent elements?
  • Performance: Is it responsive and coming back with the correct response in an acceptable time frame?
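
By way of illustration, a single check can exercise all three areas at once. Below is a minimal sketch in Python using the requests library; the endpoint URL, the expected payload fields, and the 0.8-second latency budget are hypothetical placeholders rather than recommended values.

```python
import time

import requests

API_URL = "https://api.example.com/v1/orders/12345"  # hypothetical endpoint
LATENCY_BUDGET_SECONDS = 0.8                          # example threshold, not a recommendation

def check_api() -> dict:
    start = time.monotonic()
    try:
        response = requests.get(API_URL, timeout=5)
    except requests.RequestException as exc:
        # Availability failure: the service could not be reached at all.
        return {"available": False, "error": str(exc)}
    elapsed = time.monotonic() - start

    try:
        payload = response.json()
    except ValueError:
        payload = {}

    # Functional correctness: the payload contains the constituent elements we expect.
    correct = all(key in payload for key in ("id", "status", "items"))

    return {
        "available": response.ok,                          # Availability
        "correct": correct,                                 # Functional correctness
        "fast_enough": elapsed <= LATENCY_BUDGET_SECONDS,   # Performance
        "latency_seconds": round(elapsed, 3),
    }

if __name__ == "__main__":
    print(check_api())
```

A check like this can then be run on a schedule and its results fed into whatever alerting you already use.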

When setting up monitors for your APIs, it's important to consider that performance may differ if your APIs interact with multiple audiences - whether it's an internal, external, or public API. For example, internal APIs will typically perform faster than a public solution, where there are a lot of users coming to consume that data. So, you'll want to establish different standards and SLAs for each of those groups.
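
One way to capture those differing standards is a small table of per-audience targets that your monitors evaluate against. The sketch below is only an illustration; the audience names and the availability and latency figures are assumptions, not recommended SLAs.

```python
# Hypothetical per-audience SLA targets for the same API (values are illustrative only).
SLA_TARGETS = {
    "internal": {"availability": 0.999, "p95_latency_ms": 150},
    "partner":  {"availability": 0.995, "p95_latency_ms": 400},
    "public":   {"availability": 0.990, "p95_latency_ms": 800},
}

def sla_breaches(audience, availability, p95_latency_ms):
    """Return which targets the measured values violate for the given audience."""
    target = SLA_TARGETS[audience]
    breaches = []
    if availability < target["availability"]:
        breaches.append("availability")
    if p95_latency_ms > target["p95_latency_ms"]:
        breaches.append("latency")
    return breaches

# Example: a public-facing measurement that is up, but slower than its target.
print(sla_breaches("public", availability=0.995, p95_latency_ms=950))  # -> ['latency']
```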

Once you've created your monitors and established your acceptable thresholds, you can set up alerts to be notified if performance degrades or the API goes offline. Choose the interval that you'd like to test the performance of the API while in production. This allows you to find problems and fix them before they impact your customers. You can also choose a variety of locations to test from, both inside and outside of your network. This will allow you to test your APIs live, in production, from the geographies where your customers are.
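
A scheduled checker along those lines might look roughly like the sketch below. The five-minute interval, the probe locations, and the notify_oncall stub are assumptions made for illustration; a production monitoring service would typically run such probes from agents deployed in each geography rather than from a single loop.

```python
import time

import requests

API_URL = "https://api.example.com/health"      # hypothetical health endpoint
CHECK_INTERVAL_SECONDS = 300                    # assumed interval: probe every 5 minutes
LATENCY_BUDGET_SECONDS = 0.8                    # example threshold
LOCATIONS = ["us-east", "eu-west", "ap-south"]  # hypothetical probe locations

def probe(location: str) -> dict:
    """In a real setup this would run on an agent inside `location`; here it runs locally."""
    start = time.monotonic()
    try:
        healthy = requests.get(API_URL, timeout=5).ok
    except requests.RequestException:
        healthy = False
    elapsed = time.monotonic() - start
    return {"location": location, "healthy": healthy, "latency_seconds": round(elapsed, 3)}

def notify_oncall(result: dict) -> None:
    """Placeholder alert hook: wire this up to email, chat, or your paging system."""
    print(f"ALERT from {result['location']}: {result}")

def run_forever() -> None:
    while True:
        for location in LOCATIONS:
            result = probe(location)
            if not result["healthy"] or result["latency_seconds"] > LATENCY_BUDGET_SECONDS:
                notify_oncall(result)
        time.sleep(CHECK_INTERVAL_SECONDS)
```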

Getting started with API monitoring
When we asked software professionals about the tools they use to deliver high-quality APIs, we found that while a majority of teams do functional testing (71%), fewer than half monitor their APIs in production (48%).

Luckily, with an API monitoring tool like AlertSite, you can reuse the functional tests you have set up with a tool like SoapUI to create monitors for your APIs.
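
The sketch below does not use AlertSite or SoapUI; it only illustrates the underlying idea of writing a functional check once and running it both in the CI test suite and as a production monitor. The endpoint path, expected fields, and pytest-style wrapper are hypothetical.

```python
import requests

def assert_order_api_contract(base_url: str) -> None:
    """One functional check, shared by the test suite and the production monitor.
    The endpoint path and expected fields are hypothetical."""
    response = requests.get(f"{base_url}/v1/orders/12345", timeout=5)
    assert response.status_code == 200
    assert {"id", "status", "items"} <= response.json().keys()

# In CI (e.g. under pytest), the check runs against a staging environment:
def test_order_api_contract():
    assert_order_api_contract("https://staging.example.com")

# As a monitor, the same check runs on a schedule against the live URL, and any
# AssertionError or network error is turned into an alert.
```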

More Stories By SmartBear Blog

As the leader in software quality tools for the connected world, SmartBear supports more than two million software professionals and over 25,000 organizations in 90 countries that use its products to build and deliver the world’s greatest applications. With today’s applications deploying on mobile, Web, desktop, Internet of Things (IoT) or even embedded computing platforms, the connected nature of these applications through public and private APIs presents a unique set of challenges for developers, testers and operations teams. SmartBear's software quality tools assist with code review, functional and load testing, API readiness as well as performance monitoring of these modern applications.
