Microservices Total Cost of Ownership: Too Soon? By @Aruna13

Successfully executing on the microservices model will require more than just adding a new set of development disciplines

Microservices are hot. And for good reason. To compete in today's fast-moving application economy, it makes sense to break large, monolithic applications down into discrete functional units. Such an approach makes it easier to update and add functionality (text-messaging a customer, calculating sales tax for a specific geography, etc.) and to get those changes into production quickly. In fact, some would argue that microservices are a prerequisite for true continuous delivery.
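To make the "discrete functional unit" idea concrete, here is a minimal sketch of what one such unit might look like as a standalone HTTP service: a hypothetical sales-tax calculator. The route, rate table, and JSON shape are illustrative assumptions, not anything prescribed in this article.

```go
// A minimal, hypothetical sales-tax microservice. The endpoint, rate table,
// and response shape are illustrative assumptions only.
package main

import (
	"encoding/json"
	"log"
	"net/http"
	"strconv"
)

// taxRates stands in for whatever regional tax data a real service would own.
var taxRates = map[string]float64{
	"CA": 0.0725,
	"NY": 0.0400,
	"TX": 0.0625,
}

type taxResponse struct {
	Region string  `json:"region"`
	Amount float64 `json:"amount"`
	Tax    float64 `json:"tax"`
}

func handleTax(w http.ResponseWriter, r *http.Request) {
	region := r.URL.Query().Get("region")
	rate, ok := taxRates[region]
	if !ok {
		http.Error(w, "unknown region", http.StatusBadRequest)
		return
	}
	amount, err := strconv.ParseFloat(r.URL.Query().Get("amount"), 64)
	if err != nil {
		http.Error(w, "invalid amount", http.StatusBadRequest)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(taxResponse{Region: region, Amount: amount, Tax: amount * rate})
}

func main() {
	// The unit owns one narrow concern and ships, scales, and fails on its own.
	http.HandleFunc("/tax", handleTax)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because the unit owns one narrow concern, updating a rate or adding a region means redeploying only this service - which is exactly the fast-update property described above.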

But is it too soon to talk about keeping microservices lifecycle costs under control?

Thinking ahead
It is not too soon at all. In fact, history clearly tells us it's smart to think about microservices total cost of ownership (TCO) now. The introduction of PCs into the enterprise, for example, was extremely beneficial. Yet we soon discovered that it cost us more to operate distributed environments than we had anticipated. As a result, many organizations gave back a good piece of their economic gains as they struggled with TCO for years.

Server virtualization, too, has delivered substantial benefits by enabling us to make better use of hardware, respond more adaptively to demand, and streamline DR. But honest CIOs will admit that they were also blindsided by issues around administration, monitoring and sprawl.

The microservices model is likely to follow this same pattern. Yes, organizations will benefit significantly from microservices - especially when combined with containerization. However, realistic CIOs will recognize that owning a large number of application services, rather than a relatively small number of monolithic applications, must cost IT something.

These complexity-related costs will likely include:

  • Maintaining an up-to-date microservices catalog so that DevOps teams know exactly what is available to leverage, and whom to contact with questions (a sketch of one such catalog entry follows this list)
  • Code promotion traffic that is an order of magnitude higher as releases into production multiply due to a large number of microservices being continuously updated
  • Extremely high-frequency test/QA activity to rigorously safeguard both the quality of each microservice and the multitude of "micro-calls" between microservices via multiple tests, including functional, performance/load and user acceptance testing
  • Safeguarding performance in production for a large number of discrete microservices - each of which has its own unique infrastructure dependencies
  • Securing and enforcing compliance for a large number of discrete microservices - each of which touches different data sets with different methods
  • Fragmentation of the people and teams that have to work together in order to keep the environment running smoothly and advancing at a good, fast clip
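
As a hedged illustration of the first cost above (and of the dependency and data-set tracking that the performance and compliance items imply), here is one shape a catalog entry might take. Every field name and value is an assumption made for the sketch, not a prescribed schema.

```go
// A hypothetical microservices catalog entry: enough metadata for DevOps teams
// to see what a service does, what it depends on, what data it touches, and
// whom to contact. Field names and values are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

type ServiceEntry struct {
	Name         string   `json:"name"`         // unique service name
	Version      string   `json:"version"`      // version currently in production
	Owner        string   `json:"owner"`        // team or contact for questions
	Endpoint     string   `json:"endpoint"`     // where the service is reachable
	Dependencies []string `json:"dependencies"` // infrastructure and services it relies on
	DataSets     []string `json:"data_sets"`    // data it touches, for compliance review
}

func main() {
	entry := ServiceEntry{
		Name:         "sales-tax",
		Version:      "1.4.2",
		Owner:        "payments-team@example.com",
		Endpoint:     "https://services.internal.example.com/tax",
		Dependencies: []string{"rates-db"},
		DataSets:     []string{"regional-tax-rates"},
	}
	out, _ := json.MarshalIndent(entry, "", "  ")
	fmt.Println(string(out))
}
```

Keeping even this small amount of metadata accurate for hundreds of continuously updated services is itself an operational cost - which is the point.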

Successfully executing on the microservices model will require more than just adding a new set of development disciplines. It will also require a rethinking - and perhaps even a retooling - of end-to-end DevOps management.

Incremental costs are non-trivial
There is, of course, a common tendency to stay in denial about complexity-related costs early in the hype-and-adoption process. That's because the gains look so attractive, and it can take a lot of work to achieve them. So IT leaders can be tempted to cross the complexity bridge when they come to it.

But I'd advise against that attitude. Microservices initiatives will get bogged down if they become too resource-intensive. And once you have inefficient practices in place, it's hard to displace them with more efficient ones.

If you're moving to microservices, give plenty of thought to how you can meet your new operational challenges effectively and efficiently. Because microservices are not just a dev technique. They're a whole new way of delivering value in the application economy.

More Stories By Aruna Ravichandran

Aruna Ravichandran has over 20 years of experience building and marketing products in markets such as IT Operations Management (APM, Infrastructure Management, Service Management, Cloud Management, Analytics, Log Management, and Data Center Infrastructure Management), Continuous Delivery, Test Automation, Security and SDN. In her current role, she leads product and solutions marketing, strategy, market segmentation, messaging, positioning, and competitive and sales enablement across CA's DevOps portfolio.

Prior to CA, Aruna worked at Juniper Networks and Hewlett Packard, where she held executive leadership roles in marketing and engineering.

Aruna is co-author of the book "DevOps for Digital Leaders," published in 2016. She was named one of the Top 100 Most Influential Women in Silicon Valley by the San Jose Business Journal and received the 2016 Most Powerful and Influential Woman Award from the National Diversity Council.

Aruna holds a Master's in Computer Engineering and an MBA from Santa Clara University.
