Nagios Is Not a Monitoring Strategy

A good monitoring strategy starts by identifying all of the actors who need access to data

When I visit clients to talk about DevOps, I usually ask them what their monitoring strategy is. Too often, the answer I hear is "We use Nagios." Nagios is a great tool, but it is not a strategy. Nagios does a good job of monitoring infrastructure: it will alert you when you are running out of disk, CPU, or memory. I call this reactive monitoring. In other words, Nagios is telling you that your resources are getting maxed out and you are about to have issues. Proactive monitoring focuses more on the behavior of the applications and attempts to detect when metrics are starting to stray from their normal baselines. Proactive monitoring alerts you that the system is starting to show symptoms that can lead to degraded performance or capacity problems, which is preferable to Nagios telling you that you are about to be screwed. With reactive monitoring, it is not uncommon for customers to start complaining at about the same time the Nagios alerts start going off. The goal of proactive monitoring is to head off issues so that customers don't even notice.
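To make the distinction concrete, here is a minimal sketch of what "detecting when a metric strays from its baseline" can look like. This is an illustration, not anything from the post: the rolling window size, the z-score threshold, and the latency numbers are all assumptions chosen for the example.

```python
from collections import deque
from statistics import mean, stdev

def make_baseline_detector(window=60, threshold=3.0):
    """Return a checker that tracks a rolling baseline for one metric
    and reports whether each new sample strays too far from it."""
    history = deque(maxlen=window)

    def check(value):
        alerted = False
        # Need a few samples before the baseline is meaningful.
        if len(history) >= 5:
            mu, sigma = mean(history), stdev(history)
            # Flag values more than `threshold` standard deviations
            # away from the recent baseline (a simple z-score rule).
            if sigma > 0 and abs(value - mu) > threshold * sigma:
                alerted = True
        history.append(value)
        return alerted

    return check

check_latency = make_baseline_detector(window=60, threshold=3.0)
# Steady response times around 100 ms build up a baseline without alerting...
alerts = [check_latency(v) for v in [100, 102, 98, 101, 99, 100, 103]]
# ...but a sudden jump well outside that baseline is flagged.
spike = check_latency(250)
```

A real deployment would use a proper anomaly-detection or time-series tool rather than a hand-rolled z-score, but the idea is the same: alert on deviation from normal behavior, not only on exhausted resources.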

The next question I ask is "What things are you monitoring?" A typical answer usually revolves around various infrastructure assets and databases. That's a good start, but there is much more to consider. First, though, let's talk about why proactive monitoring is so critical. In the pre-cloud days we used to ship software to our customers, who would install it, perform capacity planning, manage the infrastructure, and handle the day-to-day operations. Once we shipped the code we were done. In today's world, we are no longer shipping product. Instead we are delivering services that are always on. The customer no longer owns and operates the infrastructure and the software. Instead they pay for a service and expect that service to run reliably all the time. To meet those expectations, we need a more robust monitoring strategy. We need to monitor more than just the infrastructure.

A good monitoring strategy starts by identifying all of the actors who need access to data and all of the categories of data that need to be tracked. Some metrics are monitored in real time while others are mined from log data. Every good monitoring strategy is accompanied by a sound logging solution. In order to perform analytics to predict trends within the data, one must collect various data points ranging from customer usage activity and security controls to deployment activities and much more. The following presentation goes into much more detail about the different areas that should be monitored and why different actors need these data points to perform their jobs.

The bottom line is, before building in the cloud, it pays to invest some time in a sound monitoring strategy. I have too often seen teams that don't think through how to support these highly distributed, always-on SaaS solutions and end up delivering software that does not meet the reliability and quality expectations of customers. Monitoring provides feedback to developers, product owners, operators, and even customers so that systems can be continuously improved. Nagios is great, but there is no single monitoring solution that can be implemented to effectively operate today's always-on services.


More Stories By Mike Kavis

Mike Kavis is Vice President & Principal Cloud Architect at Cloud Technology Partners. He has served in numerous technical roles, including CTO, Chief Architect, and VP positions, and has over 25 years of experience in software development and architecture. A pioneer in cloud computing, Mike led a team that built the world’s first high speed transaction network in Amazon’s public cloud and won the 2010 AWS Global Startup Challenge.

An expert in cloud security, he is the author of “Architecting the Cloud: Design Decisions for Cloud Computing Service Models (IaaS, PaaS, SaaS)” from Wiley Publishing.
