


Making Financial Sense of #PaaS, Part II @DevOpsSummit [#DevOps]

It’s very easy to make the cloud look financially attractive when pricing out a single application


In my blog post, “Making Financial Sense of PaaS,” I analyzed the cost of delivering a newly developed mobile application using a variety of platforms. After posting, I had some great conversations about the content with Brent Smithurst (@brentsmi) of ActiveState and Mark Thiele (@mthiele10) of Switch. Brent works for a PaaS software provider and Mark is a world-renowned expert on data center operations and architecture, so their insight is extremely relevant and credible.

As Mark pointed out, it’s very easy to make the cloud look financially attractive when pricing out a single application versus a portfolio of applications. Indeed, I would have to agree that one of the most difficult things is formulating an apples-to-apples comparison of cloud to data center. Even with the concept of reservations, the cloud cannot come close to the amortization of capital allocations across a portfolio of applications. The cloud is all about sizing and costing one application at a time, whereas data centers should be all about economies of scale.
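To make the portfolio point concrete, here is a minimal Python sketch of the two pricing models. All dollar figures are hypothetical illustrations, not numbers from the original estimate; the point is only the shape of the curves.

```python
# Hypothetical cost model: per-application cloud pricing versus
# data-center capital amortized across a portfolio of applications.
# All dollar figures are illustrative assumptions.

def cloud_cost(apps, per_app_annual=30_000):
    """Cloud: each application is sized and priced on its own."""
    return apps * per_app_annual

def datacenter_cost(apps, capital=500_000, years=5, per_app_ops=5_000):
    """Data center: capital is amortized across the whole portfolio."""
    return capital / years + apps * per_app_ops

for n in (1, 5, 20):
    print(n, cloud_cost(n), round(datacenter_cost(n)))
# With these assumed figures the cloud is cheaper for a single
# application, but the amortized data center wins at 5 and 20 apps.
```

The crossover point depends entirely on the assumed figures, which is exactly why a single-application comparison skews toward the cloud.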

For the original blog post I chose a model where capital allocation occurs on a project-by-project basis. This is a fair estimating technique, given that many businesses and government agencies procure hardware and software this way. However, it does skew the results in a particular direction. Additionally, as Brent noted, the original $25,000 price tag could buy the equivalent of $48,000 worth of IBM BlueMix in GB-hours. Finally, Mark noted that even Hosted PaaS has an operational component to it. He is correct, and that was a clear oversight in the original estimate: IT Operations should be reviewing the logs from the Hosted PaaS instance and monitoring the health and performance of the application.
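For a sense of what a GB-hour budget buys, here is back-of-the-envelope arithmetic. The $0.07/GB-hour rate is an assumption for illustration only, not BlueMix's published price; substitute the current rate before drawing conclusions.

```python
# GB-hour arithmetic. RATE is an assumed illustrative figure,
# not IBM BlueMix's actual published price.
RATE = 0.07                    # USD per GB-hour (assumption)
budget = 48_000                # dollar-equivalent figure Brent cited
gb_hours = budget / RATE       # total GB-hours the budget buys
instance_year = 2 * 24 * 365   # GB-hours for one 2 GB instance for a year
print(round(gb_hours), round(gb_hours / instance_year, 1))
```

Even at a higher rate, the same budget keeps a modest instance running for many years, which is why the GB-hour comparison flatters the PaaS side.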

So, while one cannot argue with the original estimate given the stated assumptions, it is certainly not the whole picture for businesses that leverage virtualization and have pooled hardware resources. Since my labor estimates were based on time, I believe they hold regardless of the approach taken; the original estimate does not imply that individuals would be hired specifically for this application. Alternatively, if we are going to leverage economies of scale and use common platform requirements across a set of applications, then we have to recognize that the application infrastructure specialist and operations staff will need more time to handle the increased complexity. That is, as we move from a single application running in the N-tier environment to multiple applications, we need to deal with clustering, high-availability, scaling, and capacity management issues that were not accounted for in the original design estimate. This increases the amount of time and, hence, the costs associated with these roles.

Taking all these elements into account, the following is an estimate for a business that has sunk costs in an existing data center, that can add the necessary virtual machines to an existing pool without incurring significant new charges, and that will leverage the software licenses across multiple applications. As you can see, it changes the financials enough to make all of the options viable. However, the PaaS options could save on average $100,000 per application, which, for IT shops already spending on the order of 90% of their budget just to keep the lights on, could open up major opportunities for transformational activities.
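The $100,000-per-application savings figure comes from the estimate above; the total IT budget and the application count below are hypothetical, chosen only to show how quickly those savings grow the transformation budget.

```python
# Sketch of freed budget. The per-app savings is from the post's
# estimate; the total budget and app count are assumptions.
it_budget = 10_000_000
lights_on = it_budget * 90 // 100   # ~90% goes to keeping the lights on
transform = it_budget - lights_on   # what's left for transformation
savings = 100_000 * 5               # ~$100k/app across an assumed 5 apps
print(transform, savings, (transform + savings) / transform)
```

Under these assumptions, redirecting the PaaS savings grows the discretionary transformation budget by half, which is the "major opportunity" in question.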

PaaS Estimate

More Stories By JP Morgenthal

JP Morgenthal is a veteran IT solutions executive and Distinguished Engineer with CSC. He has been delivering IT services to business leaders for the past 30 years and is a recognized thought leader in applying emerging technology for business growth and innovation. JP's strengths center on transformation and modernization leveraging next-generation platforms and technologies. He has held technical executive roles in multiple businesses, including CTO, Chief Architect, and Founder/CEO. JP's areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. He is a published author of four trade publications, the most recent being “Cloud Computing: Assessing the Risks”. JP holds both a Master's and a Bachelor's of Science in Computer Science from Hofstra University.
