How to Load-Balance Microservices at Web-Scale

By Martin Goodwell

There’s no shortage of guides and blog posts offering best practices for architecting microservices. While all this information is helpful, hands-on guidelines on how to actually scale microservices are much harder to find. After a bit of research and a lot of sifting through theoretical discussion, here is how the big players load-balance microservices in practice.

Living on the edge

When a web application frontend communicates with a microservices-based backend, does the frontend need to know about every microservice instance that is available to it? For example, does a client really need to be aware of all five services that deliver web page data? The answer is a resounding NO!

Sudhir Tonse, formerly of Netflix and now at Uber, discusses the concept of edge services in his talk on Scalable Microservices at Netflix. An edge service serves as the gateway to a microservices infrastructure. So, returning to the question of which microservices a frontend client needs to know about: following Sudhir’s approach, each client communicates directly with just a single edge service. There can be one dedicated edge service per client type. For example, Netflix serves more than a thousand device types, and each device type has its own dedicated edge service that acts as its single entry point.
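To make this concrete, here is a minimal sketch of an edge service in Python. The service names, paths, and ports are hypothetical, and this is not Netflix’s actual gateway code; the point is simply that the client talks to one gateway, which routes each request to the internal microservice that owns it.

```python
# Minimal edge-service sketch: clients talk only to this gateway,
# which forwards each request to the internal microservice that
# owns the path prefix. All names and ports are made up.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Routing table: path prefix -> internal microservice base URL.
ROUTES = {
    "/catalog": "http://127.0.0.1:9001",
    "/reviews": "http://127.0.0.1:9002",
    "/ratings": "http://127.0.0.1:9003",
}

class EdgeService(BaseHTTPRequestHandler):
    def do_GET(self):
        for prefix, backend in ROUTES.items():
            if self.path.startswith(prefix):
                try:
                    with urlopen(backend + self.path) as resp:
                        status, body = resp.status, resp.read()
                except OSError:
                    self.send_error(502, "backend unavailable")
                    return
                self.send_response(status)
                self.end_headers()
                self.wfile.write(body)
                return
        self.send_error(404, "no internal service owns this path")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EdgeService).serve_forever()
```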

 

[Figure: Load-balanced edge services act as gateways to microservice environments]

 

Big players like Netflix and Riot Games, both of which run on Amazon AWS, utilize Elastic Load Balancers (ELBs) to ensure that their edge services are available at all times.
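For illustration, here is roughly what that looks like with boto3, the AWS SDK for Python. The load balancer name, availability zones, and instance ID are placeholders, not values from Netflix or Riot Games.

```python
# Sketch: front an edge service with a Classic ELB (placeholder values).
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Create the load balancer that external clients will connect to.
elb.create_load_balancer(
    LoadBalancerName="edge-service-lb",  # hypothetical name
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 8080,
    }],
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Register the EC2 instances that run the edge service.
elb.register_instances_with_load_balancer(
    LoadBalancerName="edge-service-lb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],  # placeholder ID
)
```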

Beyond the edge service

Each incoming request is analyzed, and multiple fan-out requests are then issued to the microservices that make up the ecosystem. A single inbound request results in an average of about ten fan-out requests, so the nearly two billion requests that Netflix receives each day translate into roughly 20 billion internal API calls.
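As a sketch of the fan-out pattern (with hypothetical internal endpoints), one inbound request triggers parallel calls to the microservices that each own a slice of the response:

```python
# One inbound request fans out to several internal API calls in
# parallel, so latency is bounded by the slowest dependency rather
# than the sum of all of them. Endpoint names are made up.
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

INTERNAL_CALLS = [
    "http://user-service.internal/profile/42",
    "http://catalog-service.internal/home-row/42",
    "http://ratings-service.internal/predicted/42",
    "http://billing-service.internal/plan/42",
]

def call(url):
    try:
        with urlopen(url, timeout=0.5) as resp:
            return resp.read()
    except OSError:
        return None  # degrade gracefully instead of failing the whole page

def handle_inbound_request():
    with ThreadPoolExecutor(max_workers=len(INTERNAL_CALLS)) as pool:
        return list(pool.map(call, INTERNAL_CALLS))
```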

 

[Figure: Fan-out requests in a microservice environment]

 

How does Netflix ensure that its microservices can handle such load and remain available 24/7? Again, load balancing is the solution. But this time it’s not done with ELBs: with 500 different microservices, you’d need to configure about 500 ELBs! Instead, Netflix’s tools come with built-in load-balancing capabilities. Netflix has created numerous libraries and tools that integrate easily with one another; by embedding the required libraries directly into each microservice, every service can register itself with the managing services.
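In Netflix’s stack this is the job of a service registry (Eureka) combined with client-side load balancing (Ribbon); the sketch below shows the underlying idea only, with illustrative names rather than Netflix’s actual APIs.

```python
# A service registry with round-robin, client-side load balancing:
# each microservice registers itself, and callers pick the next live
# instance from the library instead of going through a separate ELB.
import itertools
import threading

class ServiceRegistry:
    def __init__(self):
        self._lock = threading.Lock()
        self._instances = {}  # service name -> list of "host:port"
        self._cursors = {}    # service name -> round-robin iterator

    def register(self, service, address):
        # In a real registry, instances also send heartbeats and are
        # evicted when they stop responding.
        with self._lock:
            self._instances.setdefault(service, []).append(address)
            self._cursors[service] = itertools.cycle(self._instances[service])

    def next_instance(self, service):
        with self._lock:
            return next(self._cursors[service])

registry = ServiceRegistry()
registry.register("recommendations", "10.0.0.11:8080")
registry.register("recommendations", "10.0.0.12:8080")
print(registry.next_instance("recommendations"))  # -> 10.0.0.11:8080
print(registry.next_instance("recommendations"))  # -> 10.0.0.12:8080
```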

Fear not the edge service

With edge services being so important, you’re screwed if your edge service fails, right? Actually, no. For one, edge services absolutely must be load-balanced, which means your visitors likely won’t even notice an edge-service outage. And besides, what’s the alternative? In a monolithic-application environment, every central service effectively is an edge service, so an outage of any one of them, in the absence of a load balancer, means a total outage.

Nonetheless, it’s true that edge services are amongst the most delicate of services and therefore do require special attention.

The takeaway

You should seriously consider running edge services to handle your inbound traffic, and you should definitely load-balance your edge services with whatever mechanism your cloud provider offers. All internal traffic should be handled by your own tools, as this allows you to run your environment with minimal configuration overhead. So, ultimately, the most important tool for effective scaling of microservices is, not surprisingly, load balancing.

Stay tuned

One of my next posts will deal with containerization. And be aware: Docker isn’t the only solution that builds on the concept of containers.
