Microservices Total Cost of Ownership: Too Soon? By @Aruna13 | @DevOpsSummit #DevOps #Docker #Containers #Microservices

Successfully executing on the microservices model will require more than just adding a new set of development disciplines

Microservices are hot. And for good reason. To compete in today's fast-moving application economy, it makes sense to break large, monolithic applications down into discrete functional units. Such an approach makes it easier to update and add functionality (text-messaging a customer, calculating sales tax for a specific geography, etc.) and to get those updates and additions into production fast. In fact, some would argue that microservices are a prerequisite for true continuous delivery.
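To make the idea concrete, here is a minimal, hypothetical sketch of one such discrete functional unit - a sales-tax calculator that could sit behind its own service endpoint. The region codes and rates below are illustrative only, not real tax data, and the function name is an assumption for the sake of the example.

```python
# Hypothetical sketch: one "discrete functional unit" - a sales-tax
# calculator that a microservice could expose as its sole responsibility.
# Rates are illustrative placeholders, not real tax data.

SALES_TAX_RATES = {
    "CA": 0.0725,  # illustrative state rate
    "NY": 0.04,    # illustrative state rate
    "OR": 0.0,     # no state sales tax
}

def calculate_sales_tax(amount: float, region: str) -> float:
    """Return the sales tax owed on `amount` for `region`.

    Raises KeyError for unknown regions, so callers can fall back
    to a default or surface an error instead of silently guessing.
    """
    rate = SALES_TAX_RATES[region]
    return round(amount * rate, 2)
```

Because the unit does exactly one thing, updating a rate or adding a region is a small, isolated change that can be shipped to production without touching the rest of the application - which is the point of the microservices model.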

But is it too soon to talk about keeping microservices lifecycle costs under control?

Thinking ahead
It is not too soon at all. In fact, history clearly tells us it's smart to think about microservices total cost of ownership (TCO) now. The introduction of PCs into the enterprise, for example, was extremely beneficial. Yet we soon discovered that it cost us more to operate distributed environments than we had anticipated. As a result, many organizations gave back a good piece of their economic gains as they struggled with TCO for years.

Server virtualization, too, has delivered substantial benefits by enabling us to make better use of hardware, respond more adaptively to demand, and streamline disaster recovery. But honest CIOs will admit that they were also blindsided by issues around administration, monitoring and sprawl.

The microservices model is likely to follow this same pattern. Yes, organizations will benefit significantly from microservices - especially when combined with containerization. However, realistic CIOs will recognize that owning a large number of app services is bound to cost IT more than owning a relatively small number of monolithic applications.

These complexity-related costs will likely include:

  • Maintaining an up-to-date microservices catalog so that DevOps teams know exactly what is available to leverage - and whom to contact with questions
  • Code promotion traffic that is an order of magnitude higher, as releases into production multiply with a large number of continuously updated microservices
  • Extremely high-frequency test/QA activity to rigorously safeguard both the quality of each microservice and the multitude of "micro-calls" between microservices, spanning functional, performance/load and user acceptance testing
  • Safeguarding production performance for a large number of discrete microservices - each of which has its own unique infrastructure dependencies
  • Securing and enforcing compliance for a large number of discrete microservices - each of which touches different data sets with different methods
  • Fragmentation of the people and teams that have to work together in order to keep the environment running smoothly and advancing at a good, fast clip
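As a rough illustration of the first item above, a microservices catalog can start as little more than a registry that maps each service to its owner, its dependencies, and the data it touches - which also feeds the performance and compliance concerns later in the list. This is a hypothetical Python sketch; the field names and the `register`/`lookup` helpers are assumptions, not a real catalog product.

```python
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    """One record in a hypothetical microservices catalog."""
    name: str
    version: str
    owner_email: str  # whom to contact with questions
    dependencies: list = field(default_factory=list)  # infrastructure this service relies on
    data_sets: list = field(default_factory=list)     # data it touches (compliance scope)

# In-memory registry; a real catalog would be a shared, persistent service.
_catalog = {}

def register(entry: CatalogEntry) -> None:
    """Add or update a service's catalog record."""
    _catalog[entry.name] = entry

def lookup(name: str) -> CatalogEntry:
    """Fetch a service's record; raises KeyError if it isn't cataloged."""
    return _catalog[name]
```

Even a sketch like this makes the ownership cost visible: every one of potentially hundreds of services needs a record kept current, or the catalog stops answering the very questions it exists for.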

Successfully executing on the microservices model will require more than just adding a new set of development disciplines. It will also require rethinking - and perhaps even a retooling - of end-to-end DevOps management.

Incremental costs are non-trivial
There is, of course, a common tendency to stay in denial about complexity-related costs early in the hype-and-adoption process. That's because the gains look so attractive, and it can take a lot of work to achieve them. So IT leaders can be tempted to cross the complexity bridge when they come to it.

But I'd advise against that attitude. Microservices initiatives will get bogged down if they become too resource-intensive. And once inefficient practices are in place, it's hard to displace them with more efficient ones.

If you're moving to microservices, give plenty of thought to how you can meet your new operational challenges effectively and efficiently. Microservices are not just a development technique; they're a whole new way of delivering value in the application economy.

More Stories By Aruna Ravichandran

Aruna Ravichandran has over 20 years of experience in building and marketing products in various markets such as IT Operations Management (APM, Infrastructure management, Service Management, Cloud Management, Analytics, Log Management, and Data Center Infrastructure Management), Continuous Delivery, Test Automation, Security and SDN. In her current role, she leads the product and solutions marketing, strategy, market segmentation, messaging, positioning, competitive and sales enablement across CA's DevOps portfolio.

Prior to CA, Aruna worked at Juniper Networks and Hewlett Packard, where she held executive leadership roles in marketing and engineering.

Aruna is co-author of the book "DevOps for Digital Leaders," published in 2016. She was named one of the San Jose Business Journal's Top 100 Most Influential Women in Silicon Valley, and received the 2016 Most Powerful and Influential Woman Award from the National Diversity Council.

Aruna holds a Master's in Computer Engineering and an MBA from Santa Clara University.


