|By Jyoti Bansal||
|June 19, 2014 12:00 PM EDT||
Most technology folks have heard Marc Andreessen’s provocative statement, “Software is eating the world.” Whether you agree fully or not, you’re realizing that your business critical software applications increasingly drive both the top-line revenue growth and the bottom-line operational efficiency of your company – and often form the pillar of your business identity.
The legacy monitoring systems you have in place, capturing and alerting on scores of infrastructure-level metrics, have helped protect your technology investment to some degree. I’ve worked for two leaders in that space, HP and BMC, so I can personally attest to the real benefits clients achieve from server, network and database monitoring. However, as consumer demand for superior services and faster innovation accelerates, it is the applications and associated business transactions that end-users ultimately care about. You can no longer afford slow response times, let alone application outages, as customers will delay or abandon purchases – or worse yet, switch to a competitor – when application performance is unacceptable. And unfortunately these “Yellow Light,” or slow-performance, situations are the most challenging to detect and fix!
So you’ve come to the conclusion that you need a full-fledged application performance management (APM) solution. The question some companies wrestle with at this point is: “Should we invest in a 3rd party APM solution or build it ourselves?”
Four key considerations should be:
Upfront Costs – such as Initial Project Build & Software License cost.
Ongoing, Annual Solution Costs – such as Server / Storage footprint, administrative maintenance & support, & agile development / release activities.
Solution Capabilities Driving Benefits – chiefly, the ability to drive down the number of performance defects in production, as well as the MTTR when issues do occur.
Opportunity Costs – the personnel resources tied up in an in-house APM build, versus the mature 3rd party APM solutions already available for purchase.
1. Upfront Costs
It’s difficult to estimate exactly how long it would take a company to develop a basic application monitoring tool in-house – but we’ll give it a logical shot. Of course, on the plus side, the company would avoid spending money on a “commercial off-the-shelf” (COTS) 3rd party software application. Based on experience with design, development, testing, and release, a good estimate for an in-house Initial Project Build is a team of 2-3 engineers working about 6 months to have a basic log-parsing and alerting tool ready. A more robust tool for a medium to large sized deployment may be 2-3x this size and investment. A gaming company we work with, when assessing an in-house build, estimated an APM product development lifecycle in the 12 to 18 month range. Why? APM functionality that traces the user experience of distributed transactions, where every call needs to be followed across each service layer, is non-trivial technical work. Also, you’ll need to factor in one-time hardware and prerequisite software purchasing costs. So a ballpark cost from $400K to well into seven figures is reasonable.
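To make that scope concrete, the “basic log-parsing and alerting tool” such a team would produce first might look something like the sketch below. The log format, service names, and latency threshold are hypothetical, purely for illustration of how shallow a first-cut tool is compared to full transaction tracing:

```python
import re
from collections import Counter

# Hypothetical log line format: "2014-06-19 12:00:01 ERROR checkout latency=2300ms"
LINE_RE = re.compile(r"(?P<level>ERROR|WARN|INFO)\s+(?P<service>\w+)\s+latency=(?P<ms>\d+)ms")

LATENCY_THRESHOLD_MS = 2000  # alert on any request slower than this

def scan(lines):
    """Return (error counts per service, slow requests) for a batch of log lines."""
    errors = Counter()
    slow = []
    for line in lines:
        m = LINE_RE.search(line)
        if not m:
            continue  # a real tool needs to handle many formats; we skip
        if m.group("level") == "ERROR":
            errors[m.group("service")] += 1
        latency = int(m.group("ms"))
        if latency > LATENCY_THRESHOLD_MS:
            slow.append((m.group("service"), latency))
    return errors, slow

sample = [
    "2014-06-19 12:00:01 ERROR checkout latency=2300ms",
    "2014-06-19 12:00:02 INFO search latency=150ms",
    "2014-06-19 12:00:03 WARN checkout latency=2500ms",
]
errors, slow = scan(sample)
print(errors)  # Counter({'checkout': 1})
print(slow)    # [('checkout', 2300), ('checkout', 2500)]
```

Note what is missing: no cross-service correlation, no transaction tracing, no baselining – which is exactly the gap that pushes the build estimate from 6 months toward 12-18.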
What would be the upfront software licensing cost of a 3rd party APM solution? Probably in a similar range, perhaps higher in certain cases. Also, many APM companies offer lower annual SaaS subscription costs as an alternative to full upfront licensing payments – which add up to the same licensing fees over 3-5 years. However, you should take into consideration that some solutions such as AppDynamics, which can be downloaded and installed via self-service within hours, provide immediate Time-to-Value versus waiting for a full software development lifecycle to occur for a custom built solution.
Advantage: Cost = In-house (slight? depends on robustness of APM solution built), Time-to-Value = 3rd Party APM
2. Ongoing, Annual Solution Costs
First, let’s determine the hardware & storage footprint required for the solution. Typical in-house developed solutions are architected for over-capacity because of unknowns, and to avoid encountering limitations & performance issues later. A good estimate per environment (Dev, Test, Prod) may be 2 Large Servers and 16 TB of Storage for a starter in-house APM solution. This cost might run in the $100K to $135K range per year.
For 3rd party APM solutions, the specs are well-known, validated, and published. A leading APM solution like AppDynamics has been built and tuned via R&D by specialists over several years. The footprint for a similar medium-sized deployment would be 1 Medium Server and 6 TB of Storage, for a rough cost of about $40-50K per year – or less than half of the in-house cost.
From an FTE support perspective for the in-house solution, you have to understand the administrative, support, & enhancement / new development labor required. A good admin & support estimate would run about 1-2 FTEs, and new development might run 2 engineering FTEs to keep up with enhancement requests and coverage for new applications & technologies. Remember, users will not expect the APM solution to stay static! You might start with basic metric stores and time series data, but this will quickly run out of steam. Next, you’ll want to build a baseline engine for the metric store based on load patterns and percentiles of metrics, as examples. Demand for dashboarding and security access control requirements come into play, and require much design and testing work especially as the solution scales. So this annual labor cost would run in the $375K plus range.
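As a rough illustration of why even the first cut of that baseline engine is real engineering work, here is a minimal sketch of an hourly percentile baseline with anomaly flagging. The nearest-rank percentile method, the 95th-percentile default, and the 25% tolerance are all assumptions chosen for illustration:

```python
import math

def percentile(values, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(values)
    k = math.ceil(pct / 100 * len(ordered)) - 1
    return ordered[k]

def build_baseline(history, pct=95):
    """Per-hour-of-day baseline from historical (hour, value) samples,
    so that, e.g., nightly batch load doesn't alert against daytime norms."""
    by_hour = {}
    for hour, value in history:
        by_hour.setdefault(hour, []).append(value)
    return {hour: percentile(values, pct) for hour, values in by_hour.items()}

def is_anomalous(hour, value, baseline, tolerance=1.25):
    """Flag a live sample more than 25% above its hourly baseline."""
    return hour in baseline and value > baseline[hour] * tolerance

# Response times (ms) observed at the 9:00 hour over many days
history = [(9, ms) for ms in range(100, 200)]
base = build_baseline(history)
print(base[9])                     # 194
print(is_anomalous(9, 260, base))  # True: 260 > 194 * 1.25
print(is_anomalous(9, 210, base))  # False
```

Multiply this by seasonality, weekday/weekend patterns, per-transaction baselines, dashboarding, and access control, and the 2-FTE enhancement estimate starts to look conservative.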
On top of that, in today’s Agile DevOps world, there are additional maintenance / revision labor costs each time a business application is released to production. Appliances and/or monitoring agents need updating, and both application and business transaction topology maps likely need to be revised manually. As the frequency of application release grows, often to a bi-weekly application release schedule, these are not insignificant tasks. We estimate in a medium sized deployment, this could require about 2,000 labor hours per year to keep up, or about $100K.
In the AppDynamics APM world, these types of capabilities are already built into the solution. So the maintenance per application release is zero since there is automated application discovery, mapping, and business transaction flows out of the box. The ongoing FTE administrative & maintenance requirements for a medium-sized deployment are 1 FTE, or about $125K/year. And new development is covered in the license costs via the hundreds of R&D professionals contributing to the various releases of the 3rd party software.
Advantage: 3rd Party APM (large, especially adding up multiple years)
3. Solution Capabilities Driving Benefits
Next we look at the ability of an APM solution to provide benefits to your enterprise – which can be grouped into reducing costs, mitigating risks, and increasing or protecting revenue. Two key performance metrics we suggest for measuring impact on cost, risk, and revenue are:
# defects released to production
Mean time to repair (MTTR) per performance issue
At AppDynamics, this is where we’ve invested our R&D dollars since 2008, and our industry-exceeding Net Promoter Score (NPS) of 84 – i.e., more than 8 in 10 customers would recommend us to a friend or colleague – is a testimony to our ability to achieve these benefits.
By leveraging AppDynamics in Pre-Production, our clients often report a 40% reduction in performance issues released to Production. And by watching every line of code executed in Production, and measuring & scoring each transaction, we provide a “3 clicks to resolution” approach that often reduces MTTR per performance issue by 65% or more. This is true of small application environments as well as large deployments of over 20,000 JVMs.
For an in-house solution, you have to assess what it would take to build similar APM capabilities to achieve these levels of defect and MTTR reduction. How many years, developers, and dollars? (And, as one client executive recently told us, “If I could do this, why wouldn’t my company be competing in the APM software space?!”) Or alternatively and more likely, “let’s stitch something low-cost together” in-house. Admittedly this sacrifices capability for cost, which translates into fewer features to address the MTTR and # of performance issue challenges you face.
For ballpark purposes, then, let’s credit the in-house solution with helping reduce both the # of defects and MTTR by up to 20%. If we use an industry-average cost per minute of slowness / downtime of $500 (inclusive of both labor and revenue-protection factors), and assume one Sev1 performance issue per application per quarter, the difference between the in-house solution and a 3rd party APM solution would equate to over $1M per year for a medium sized deployment.
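The arithmetic behind that ballpark can be sketched in a few lines. The $500/minute figure, the one-Sev1-per-app-per-quarter rate, and the 20% / 40% / 65% reduction figures come from the text above; the application count (25) and the unaided 60-minute repair time per Sev1 are hypothetical assumptions for a medium sized deployment:

```python
COST_PER_MINUTE = 500       # industry-average cost of slowness/downtime
APPS = 25                   # hypothetical medium-sized deployment
SEV1_PER_APP_PER_YEAR = 4   # one Sev1 per application per quarter
BASELINE_MTTR_MIN = 60      # hypothetical unaided repair time per Sev1

def annual_incident_cost(defect_reduction, mttr_reduction):
    """Annual cost of Sev1 slowness/downtime, after a tool's reductions."""
    incidents = APPS * SEV1_PER_APP_PER_YEAR * (1 - defect_reduction)
    mttr = BASELINE_MTTR_MIN * (1 - mttr_reduction)
    return incidents * mttr * COST_PER_MINUTE

in_house = annual_incident_cost(0.20, 0.20)     # ~20% on both
third_party = annual_incident_cost(0.40, 0.65)  # 40% fewer defects, 65% lower MTTR

print(f"In-house:   ${in_house:,.0f}")                # $1,920,000
print(f"3rd party:  ${third_party:,.0f}")             # $630,000
print(f"Difference: ${in_house - third_party:,.0f}")  # $1,290,000 per year
```

Change the assumed application count or baseline MTTR and the absolute numbers move, but the gap stays comfortably over $1M for any deployment of this general size.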
Advantage: 3rd Party APM (not close; and these add up year over year, too)
4. Opportunity Costs
These costs deal chiefly with choosing what is most valuable for your developers to spend their time on. Especially in today’s high-technology enterprises, there are excellent engineers capable of building fantastic tools across a wide range of areas – so it is tempting to initiate an in-house APM build project and get something out the door. However, APM is not these engineers’ specialty, and their talents are often better utilized on software projects tied to the revenue-driving core goods & services your company sells to its end-use customers.
This is an area we won’t attempt to quantify, as it’s more of a qualitative assessment and business decision specific to your organization. But with a fairly mature and continually developing 3rd Party APM market, for most enterprises it’s logical to say….
Advantage: 3rd Party APM
While the initial, upfront set of costs for an in-house vs. 3rd party APM solution purchase may be about the same (license vs build) – which leads some organizations to consider a “Do It Yourself” approach – there are significant ongoing annual costs for the care and feeding of an in-house APM solution compared to the 3rd party APM alternative. These include the infrastructure footprint, as well as labor costs associated with administration, maintenance & enhancements.
The biggest differential in cost is typically related to the chief purpose of an APM solution – how often does it proactively reduce the number of production defects, and how fast does it help you resolve performance issues when they do inevitably occur?
For a medium sized deployment, the total cost / benefit advantages of a 3rd party APM solution easily exceed $1M per year when compared to the in-house build alternative. This benefit accumulates year over year. And it’s worth mentioning here at AppDynamics, we achieve magnitudes of benefit even beyond other 3rd party APM solutions with lesser capabilities. We’ve leveraged the feedback of our over 1,000 customers during the past several years to drive R&D and greater benefit realization.
For AppDynamics, these advantages stem from:
The way our solution is architected to require minimal setup, upkeep and Time-to-Value, while providing ongoing Ease of Use.
Key capabilities – such as transaction tracing across complex, distributed applications, in your data center and the cloud – which lead to significant improvement in KPIs such as # performance defects and MTTR.
Our ability to intelligently scale to support the most complex and largest Pre-Production and Production environments.
Thought-leadership expanding into our “Application Intelligence” platform with a host of new modules and capabilities.
So when assessing an in-house vs 3rd party APM solution, consider a multi-year TCO horizon and not just a short-term initial cost estimate. Our personnel at AppDynamics stand by to help you not only with a deep-dive on the APM market and our solution features, but also with analyzing the value of your APM choices via a detailed ROI assessment.
Thinking of trying a next generation APM solution rather than build it yourself? Try AppDynamics for free today!
The post Thinking About APM? 4 Key Considerations for Buy vs. Build Your Own written by Mike Murphy appeared first on Application Performance Monitoring Blog from AppDynamics.