Active vs. Passive Server Monitoring

By Chris Riley

It's important to determine if you are only responding to what the logs tell you about the past.

Server monitoring is a requirement, not a choice. It applies to your entire software stack: web-based enterprise suites, custom applications, e-commerce sites, local area networks, and more. Unmonitored servers are lost opportunities for optimization, difficult to maintain, more unpredictable, and more prone to failure.

While it is very likely that your team has a log analysis initiative, it's important to determine whether you are only responding to what the logs tell you about the past, or planning ahead based on the valuable log data you are monitoring and analyzing.


There are two basic approaches to server monitoring: passive and active. They are as much a state of mind as a process, and there are significant differences in the kind of value each provides; each has its own advantages and disadvantages.

Passive Monitoring

Passive server monitoring looks at real-world historical performance by monitoring actual log-ins, site hits, clicks, requests for data, and other server transactions. When it comes to addressing issues in the system, the team reviews the historical log data and analyzes it to troubleshoot and pinpoint issues. This was previously done with a manual pull of logs. While this helps developers identify where issues are, using a powerful modern log analysis service simply to automate that existing process is a waste.
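To make the distinction concrete, here is a minimal sketch of that after-the-fact workflow: pull a historical access log and tally what has already happened. The log path and combined-log format here are assumptions for illustration, not details of any particular stack.

```python
import re
from collections import Counter

# Hypothetical log location and combined-log format -- adjust for your stack.
LOG_PATH = "/var/log/myapp/access.log"

# In a combined-format line, the status code follows the quoted request string.
STATUS = re.compile(r'" (\d{3}) ')

status_counts = Counter()
with open(LOG_PATH) as log:
    for line in log:
        match = STATUS.search(line)
        if match:
            status_counts[match.group(1)] += 1

# The classic passive workflow: report on what already happened.
for status, count in status_counts.most_common():
    print(f"{status}: {count}")
```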

Passive server monitoring only shows how your server handles existing conditions; it may not give you much insight into how your server will deal with future ones. For example, one component of the system, such as a database server, may be headed for overload if the load keeps growing at its current rate. That won't be clear from server log data that has already been recorded, unless your team is willing to stare at a graph in real time, 24/7... which has nearly been the case in some NOC operations I have witnessed.

Active Monitoring

The most effective way to get past these limits is active server monitoring. Active monitoring is the approach that leverages smart recognition algorithms to take current log data and use it to predict future states. This is done with some complex statistics (way over my head) that compare real-time conditions to previous conditions or past issues. For example, it leverages anomaly detection, steady-state analysis, and trending capabilities to predict that a workload is about to hit its maximum capacity, or to flag a sudden decrease in external network-received packets, a sign of public web degradation.
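As a rough illustration of the idea (not the actual algorithms a monitoring service uses), the sketch below projects when a growing per-minute load would hit an assumed capacity and flags a reading that falls far outside the recent steady state. The sample numbers and thresholds are hypothetical.

```python
import statistics

def minutes_until_capacity(samples, capacity):
    # Naive trend projection: average the per-minute change across the window
    # and extrapolate to the point where the workload would hit capacity.
    if len(samples) < 2:
        return None
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    slope = statistics.mean(deltas)
    if slope <= 0:
        return None  # flat or falling load: no projected breach
    return (capacity - samples[-1]) / slope

def is_anomalous(samples, latest, threshold=3.0):
    # Flag a reading that sits far outside the recent steady state,
    # e.g. a sudden drop in received packets.
    mean = statistics.mean(samples)
    spread = statistics.pstdev(samples) or 1e-9
    return abs(latest - mean) / spread > threshold

# Hypothetical per-minute request counts pulled from live log data.
recent_load = [410, 440, 485, 520, 570, 630]
print(minutes_until_capacity(recent_load, capacity=1000))  # ~8.4 minutes away
print(is_anomalous(recent_load, latest=90))                # True: sharp drop
```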

Besides telling you what is likely to happen, it also helps you avoid the time spent on log deep dives. Issues will sometimes still pass you by, and you will still need to take a deeper look, but because information is pushed to you, some of the work is already done, and you can avoid the log hunt.

Oh, and it can help the product and dev teams from an architectural standpoint. If, for example, a key page is being accessed infrequently, or a specific link to that page is rarely used, it may indicate a problem with the design of the referring page, or with one of the links leading to that page. A close look at the log can also tell you whether certain pages are being accessed more often than expected, which can be a sign that the information on those pages should be displayed or linked more prominently.
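A toy example of that kind of access-frequency check, with hypothetical request paths standing in for real log data:

```python
from collections import Counter

# Hypothetical request paths pulled from the access log.
requested_paths = [
    "/pricing", "/docs/install", "/pricing", "/about",
    "/docs/install", "/pricing", "/promo/new-feature",
]

hits = Counter(requested_paths)

# Pages hit far less (or far more) often than expected hint at problems with
# the referring pages or navigation, as described above.
for path, count in hits.most_common():
    print(f"{count:4d}  {path}")
```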

Any Form of Server Monitoring is Better Than None

Log analysis tools are the heart of both approaches. Log analysis can indicate unusual activity that might slip past an already overloaded team. Another serious case is security. A series of attempted page hits that produce "page not found" or "access denied" errors, for example, could just be coming from a bad external link, or they could be signs of an attacker probing your site. HTTP requests that are pegging a server process could be a sign that a denial-of-service attack has begun.
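A small sketch of that security case, assuming client IPs and status codes have already been extracted from the log; the threshold and addresses are made up for illustration:

```python
from collections import defaultdict

# Hypothetical (client IP, status code) pairs extracted from the access log.
events = [
    ("203.0.113.7", 404), ("203.0.113.7", 404), ("203.0.113.7", 403),
    ("198.51.100.2", 200), ("203.0.113.7", 404), ("203.0.113.7", 404),
]

PROBE_THRESHOLD = 4  # tolerate a few bad external links before flagging

denied_by_ip = defaultdict(int)
for ip, status in events:
    if status in (403, 404):
        denied_by_ip[ip] += 1

for ip, count in denied_by_ip.items():
    if count >= PROBE_THRESHOLD:
        print(f"possible probe from {ip}: {count} not-found/denied responses")
```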

It is hard to make the shift. Why? Not because you and your team aren't interested in thinking ahead, but because many operations are entrenched in existing processes that are themselves reactive. And sometimes teams are simply unaware that their tool can provide this type of functionality, until one day it does it automatically and you get a pleasant surprise.

Active server monitoring can mean the difference between preventing problems before they have a chance to happen and rushing to catch up with trouble after it strikes. It is also the difference between a modern version of an old process and moving forward to a modern software delivery pipeline.

