
Microservices design: Get scale, availability right


At Electric Cloud, we’re seeing an almost exponential growth in our customers’ interest in microservices and containers, even though we’re still in the early days of their use.

The promise of microservices is that you can divide and conquer the problem of a large application by breaking it down into its constituent services, each defined by what it actually accomplishes. Each service can be supported by an independent team. You get to the point where you can break through the limits on productivity that Fred Brooks described in his book, The Mythical Man-Month.

Aside from letting you throw more people at the problem and, unlike what Brooks observed, actually become more efficient, getting a microservices-based application into production lets you quickly start thinking about how to scale it. Think resiliency and high availability. You can also easily determine which services don’t need scaling or high availability at all.

These things become easier than with a large, monolithic application, because each microservice can scale in its own way. Here are my insights about these variables, and the decisions you may face in designing your own microservices platform.

How do you know when to scale a microservice?

The nice thing about running in a microservices environment is that you don’t have to scale everything in order to scale something. You may have some services in your application that don’t need to scale at all. They may be fine running as a single service (or as a dual-instance service, simply for failover). On the other hand, you may have many services that really do need to scale, to the tune of dozens or hundreds of instances of a particular service.
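To make that concrete, here is a minimal sketch of what declaring scaling intent per service might look like. It isn’t tied to any particular orchestration platform; the service names, replica counts, and the Go types are illustrative assumptions:

```go
// Illustrative sketch: each service declares its own scaling policy instead of
// the whole application scaling as one unit. Names and numbers are made up.
package main

import "fmt"

// ScalingPolicy captures how many instances a service should run and whether
// load is allowed to drive the replica count up.
type ScalingPolicy struct {
	MinReplicas int  // enough instances for failover
	MaxReplicas int  // upper bound under peak load
	Autoscale   bool // whether load should drive the replica count
}

func main() {
	// Each service gets its own policy; nothing forces them to match.
	policies := map[string]ScalingPolicy{
		"avatar":         {MinReplicas: 2, MaxReplicas: 2, Autoscale: false},
		"shopping-cart":  {MinReplicas: 3, MaxReplicas: 20, Autoscale: true},
		"recommendation": {MinReplicas: 5, MaxReplicas: 100, Autoscale: true},
	}
	for name, p := range policies {
		fmt.Printf("%-14s %d-%d replicas (autoscale=%v)\n",
			name, p.MinReplicas, p.MaxReplicas, p.Autoscale)
	}
}
```

In practice the same intent would be expressed in whatever configuration your orchestrator uses; the point is simply that the decision is made per service rather than once for the whole application.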

For example, your shopping website has many features you want to present to users as they browse and shop. You probably have a recommendation service, which provides options that might attract users according to their search terms or current web page. Now, you can make that recommendation service part of your entire monolithic application, or you can break it off into its own service. It’s the same with the shopping cart service, and with the avatar service that may show users a pre-selected image of themselves when they log into their accounts.

But when you think about the work that the avatar service must do, it’s not much. It must search through a repository of images, and return a particular user’s image. This is a well-understood requirement that doesn’t need to scale.

On the other hand, the recommendation service is going to be more complex, and fairly heavyweight. Each user session presents a new set of variables. What is similar when users search for product A? What do they end up buying when they search for product B? There’s much more data involved and more querying, all part of a more compute-intensive capability. From a scaling perspective, this is very different from a tiny avatar service that simply hands out a JPEG file every time it’s requested during a user session.
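To show just how little the avatar service has to do, here is a bare-bones sketch of such a handler in Go. The /avatar endpoint, the user query parameter, and the on-disk avatars directory are assumptions made purely for illustration, not how any particular site implements it:

```go
// A minimal avatar service sketch: look up one user's image and return it.
package main

import (
	"log"
	"net/http"
	"os"
	"path/filepath"
)

// avatarHandler does the service's entire job: find the user's image in a
// repository (a directory here) and hand back the JPEG.
func avatarHandler(w http.ResponseWriter, r *http.Request) {
	userID := r.URL.Query().Get("user")
	if userID == "" {
		http.Error(w, "missing user", http.StatusBadRequest)
		return
	}
	// filepath.Base keeps the lookup inside the avatars directory.
	img, err := os.ReadFile(filepath.Join("avatars", filepath.Base(userID)+".jpg"))
	if err != nil {
		http.NotFound(w, r)
		return
	}
	w.Header().Set("Content-Type", "image/jpeg")
	w.Write(img)
}

func main() {
	http.HandleFunc("/avatar", avatarHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The recommendation service, by contrast, would be fanning out queries over behavioral data for every request, which is exactly why the two scale so differently.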

Give every service its own level of availability

With microservices, you don’t need the same availability for each service. Not everyone will talk about this, because you may not want to say to your team, “service A doesn’t have to be as available as service B.” But at the end of the day, the shopping cart had better be highly available, or your customers won’t purchase anything. If the avatar service suddenly isn’t available, on the other hand, and a blank box shows up instead, customers probably aren’t going to buy fewer things or leave your site altogether.

The point is, you can have different requirements for your services in terms of uptime, scalability, delivery frequency, and more.
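One way to reason about those differing requirements is a quick back-of-the-envelope calculation (not from the original article): if replicas fail independently and each is up a fraction a of the time, then n replicas give roughly 1 - (1 - a)^n overall availability. The numbers below are made up, but they show why a few cart replicas buy a lot of availability while a single avatar instance may be perfectly acceptable:

```go
// Back-of-the-envelope availability sketch, assuming independent failures.
package main

import (
	"fmt"
	"math"
)

// combinedAvailability returns the availability of n independent replicas,
// each up with probability a, assuming any one replica can serve traffic.
func combinedAvailability(a float64, n int) float64 {
	return 1 - math.Pow(1-a, float64(n))
}

func main() {
	perInstance := 0.99 // a single instance that is up 99% of the time
	fmt.Printf("avatar, 1 instance:        %.4f\n", combinedAvailability(perInstance, 1))
	fmt.Printf("shopping cart, 3 replicas: %.6f\n", combinedAvailability(perInstance, 3))
}
```

Real instances rarely fail fully independently, so treat this as an upper bound, but it captures why you spend your redundancy budget on the cart rather than the avatars.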

Use purpose-fit technology stacks

There are many things that are good and interesting about microservices. As separate services, they communicate only over the network, which means you can use a completely different technology stack to support each one.

You can determine what’s fit for purpose for each service: a key-value store, for example, if that’s all a service needs. If you’re using a relational database for your shopping cart service, you’re probably doing credit card authorization, which involves a bit more of a technology stack than if you’re doing recommendations based on a big data analytics engine.
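As a toy illustration of that freedom, here is a sketch in which each service owns its storage choice behind its own API. The in-memory maps are stand-ins invented for this example; in practice the avatar service might sit on a key-value store while the cart service sits on a relational, transactional database:

```go
// Toy sketch: each service hides its own storage choice behind its API, so
// nothing forces the avatar and cart services to share one technology.
package main

import "fmt"

// AvatarService only needs a key-value style lookup: user ID in, image bytes out.
type AvatarService struct {
	images map[string][]byte // stand-in for a key-value store
}

func (s *AvatarService) Avatar(userID string) []byte {
	return s.images[userID]
}

// CartService deals with orders and payment authorization, which in practice
// tends to want a relational, transactional database; a map stands in here.
type CartService struct {
	items map[string][]string // stand-in for relational tables
}

func (s *CartService) Items(userID string) []string {
	return s.items[userID]
}

func main() {
	avatars := &AvatarService{images: map[string][]byte{"alice": []byte("...jpeg...")}}
	carts := &CartService{items: map[string][]string{"alice": {"sku-123", "sku-456"}}}

	fmt.Printf("avatar bytes for alice: %d\n", len(avatars.Avatar("alice")))
	fmt.Printf("cart items for alice:   %v\n", carts.Items("alice"))
}
```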

Think about how that compares with a monolith. You have to make a lot of compromises. You’re going to have to pick one technology stack that works for every problem you have to solve, which makes it difficult for organizations to adopt and use new technologies. If I need to revise some feature within my monolith, I’m not going to rewrite the whole thing just so that I can use some cool new framework. But with a microservices architecture, that isn’t an issue.

The right size and complexity for a microservice is…

When people ask me, as they often do, what I think is the right size or level of complexity for a microservice, I tell them that, as a general rule of thumb, a small team should be able to rebuild it from scratch in a few weeks. That means doing it within one sprint, or maybe two at most. With that focus, you can adopt new technologies to replace or augment parts of your application architecture.

In the monolith scenario, you’re much more constrained in your technology choices, simply because one size must fit all. But when you break down the problem into more fundamentally independent pieces, you can use the technologies that best fit the service.

That flexibility is a great benefit.

So are you ready to design your microservices?

 

This article originally appeared on TechBeacon.

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia’s Director 7 and various Shockwave products.
