Microservices Total Cost of Ownership: Too Soon? By @Aruna13 | @DevOpsSummit #DevOps #Docker #Containers #Microservices

Successfully executing on the microservices model will require more than just adding a new set of development disciplines

Microservices are hot. And for good reason. To compete in today's fast-moving application economy, it makes sense to break large, monolithic applications down into discrete functional units. Such an approach makes it easier to update existing functionality and add new capabilities (text-messaging a customer, calculating sales tax for a specific geography, etc.) and to get those changes into production fast. In fact, some would argue that microservices are a prerequisite for true continuous delivery.
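To make that concrete, here is a minimal sketch of what one such discrete functional unit might look like: a small sales-tax service with a single HTTP endpoint. The use of Python and Flask, the endpoint path, and the flat per-state rates are illustrative assumptions only; the article prescribes no particular stack.

    # A minimal sketch of one discrete functional unit: a sales-tax microservice.
    # Flask, the route, and the rate table below are assumptions for illustration.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    # Hypothetical per-state rates, standing in for a real tax engine.
    TAX_RATES = {"CA": 0.0725, "NY": 0.04, "TX": 0.0625}

    @app.route("/sales-tax")
    def sales_tax():
        state = request.args.get("state", "").upper()
        amount = float(request.args.get("amount", "0"))
        rate = TAX_RATES.get(state)
        if rate is None:
            return jsonify(error=f"unknown state {state!r}"), 404
        return jsonify(state=state, amount=amount, tax=round(amount * rate, 2))

    if __name__ == "__main__":
        app.run(port=8080)  # each functional unit ships and scales on its own

In this model, changing a tax rate for one geography means redeploying only this small service, not an entire monolithic application.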

But is it too soon to talk about keeping microservices lifecycle costs under control?

Thinking ahead
It is not too soon at all. In fact, history clearly tells us it's smart to think about microservices total cost of ownership (TCO) now. The introduction of PCs into the enterprise, for example, was extremely beneficial. Yet we soon discovered that it cost us more to operate distributed environments than we had anticipated. As a result, many organizations gave back a good piece of their economic gains as they struggled with TCO for years.

Server virtualization, too, has delivered substantial benefits by enabling us to make better use of hardware, respond more adaptively to demand, and streamline DR. But honest CIOs will admit that they were also blindsided by issues around administration, monitoring and sprawl.

The microservices model is likely to follow this same pattern. Yes, organizations will benefit significantly from microservices - especially when combined with containerization. However, realistic CIOs will recognize that owning a large number of discrete application services will cost IT more than owning a relatively small number of monolithic applications.

These complexity-related costs will likely include:

  • Maintaining an up-to-date microservices catalog so that DevOps teams know exactly what is available to leverage, and whom to contact with questions (a minimal catalog sketch follows this list)
  • Code promotion traffic that is an order of magnitude higher, as releases into production multiply because a large number of microservices are being continuously updated
  • Extremely high-frequency test/QA activity to rigorously safeguard both the quality of each microservice and the multitude of "micro-calls" between microservices, via multiple kinds of tests including functional, performance/load and user acceptance testing (a sample functional test also follows this list)
  • Safeguarding performance in production for a large number of discrete microservices - each of which has its own unique infrastructure dependencies
  • Securing and enforcing compliance for a large number of discrete microservices - each of which touches different data sets via different methods
  • Fragmentation of the people and teams that have to work together in order to keep the environment running smoothly and advancing at a good, fast clip
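As an illustration of the catalog item above, a microservices catalog can start as simply as a shared, versioned registry of entries like the following. The fields, service names, and contact addresses are hypothetical; in practice such a catalog is often backed by a service registry or an internal developer portal.

    # A minimal sketch of a microservices catalog entry: what exists, where it
    # runs, and who owns it. All names and addresses below are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class ServiceEntry:
        name: str                      # service identifier
        version: str                   # currently deployed version
        endpoint: str                  # where to call it
        owner: str                     # whom to contact with questions
        depends_on: list = field(default_factory=list)  # services it calls

    CATALOG = {
        "sales-tax": ServiceEntry(
            name="sales-tax",
            version="1.4.2",
            endpoint="https://internal.example.com/sales-tax",
            owner="payments-team@example.com",
            depends_on=["geo-lookup"],
        ),
        "sms-notify": ServiceEntry(
            name="sms-notify",
            version="2.0.0",
            endpoint="https://internal.example.com/sms-notify",
            owner="comms-team@example.com",
        ),
    }

    def find_owner(service_name: str) -> str:
        """Answer the 'who do I ask?' question the catalog exists to solve."""
        return CATALOG[service_name].owner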
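And as a sketch of the test/QA item, a single functional test of one "micro-call" might look like the following, runnable under pytest. The endpoint URL, parameters, and expected responses are assumptions carried over from the hypothetical sales-tax service sketched earlier.

    # A sketch of one high-frequency functional test exercising a single
    # "micro-call". The URL and expected behavior are illustrative assumptions.
    import requests

    SALES_TAX_URL = "https://internal.example.com/sales-tax"  # hypothetical

    def test_sales_tax_happy_path():
        resp = requests.get(SALES_TAX_URL, params={"state": "CA", "amount": 100})
        assert resp.status_code == 200
        body = resp.json()
        assert body["state"] == "CA"
        assert body["tax"] == round(100 * 0.0725, 2)

    def test_sales_tax_unknown_state_is_rejected():
        resp = requests.get(SALES_TAX_URL, params={"state": "ZZ", "amount": 100})
        assert resp.status_code == 404

Multiply a handful of such tests by hundreds of services and continuous releases, and the scale of the test/QA cost above becomes clear.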

Successfully executing on the microservices model will require more than just adding a new set of development disciplines. It will also require rethinking - and perhaps even a retooling - of end-to-end DevOps management.

Incremental costs are non-trivial
There is, of course, a common tendency to stay in denial about complexity-related costs early in the hype-and-adoption process. That's because the gains look so attractive, and it can take a lot of work to achieve them. So IT leaders can be tempted to cross the complexity bridge when they come to it.

But I'd advise against that attitude. Microservices initiatives will get bogged down if they become too resource-intensive. And once you have inefficient practices in place, it's hard to displace them with more efficient ones.

If you're moving to microservices, give plenty of thought to how you can meet your new operational challenges effectively and efficiently. Because microservices is not just a dev technique. It's a whole new way of delivering value in the application economy.

More Stories By Aruna Ravichandran

Aruna Ravichandran has over 20 years of experience in building and marketing products in markets such as IT Operations Management (APM, Infrastructure Management, Service Management, Cloud Management, Analytics, Log Management, and Data Center Infrastructure Management), Continuous Delivery, Test Automation, Security and SDN. In her current role, she leads product and solutions marketing, strategy, market segmentation, messaging, positioning, competitive and sales enablement across CA's DevOps portfolio.

Prior to CA, Aruna worked at Juniper Networks and Hewlett Packard, where she held executive leadership roles in marketing and engineering.

Aruna is co-author of the book "DevOps for Digital Leaders," published in 2016. She was named one of the Top 100 Most Influential Women in Silicon Valley by the San Jose Business Journal and received the 2016 Most Powerful and Influential Woman Award from the National Diversity Council.

Aruna holds a Master's in Computer Engineering and an MBA from Santa Clara University.
