10 Things We Forgot to Monitor

After publishing Ben’s blog post about “Memory Monitoring with LXC”, we realized there is a lot of interest in articles about monitoring. I got in touch with Jehiah Czebotar, Head of Engineering at bitly, and asked him if we could republish his blog post about the things bitly forgot to monitor.

Jehiah originally published his blog post on the bitly engineering blog. You can find Jehiah on Twitter and on his personal page. Definitely check out his “Personal Annual Reports”.


There is always a set of standard metrics that are universally monitored (Disk Usage, Memory Usage, Load, Pings, etc). Beyond that, there are a lot of lessons that we’ve learned from operating our production systems that have helped shape the breadth of monitoring that we perform at bitly.

One of my all-time favorite tweets is from @DevOps_Borat.

What follows is a small list of things we monitor at bitly that have grown out of those (sometimes painful!) experiences, and where possible little snippets of the stories behind those instances.

1 – Fork Rate

We once had a problem where IPv6 was intentionally disabled on a box via options ipv6 disable=1 and alias ipv6 off in /etc/modprobe.conf. This caused a large issue for us: each time a new curl object was created, modprobe would spawn, checking net-pf-10 to evaluate IPv6 status. This fork bombed the box, and we eventually tracked it down by noticing that the process counter in /proc/stat was increasing by several hundred a second. Normally you would only expect a fork rate of 1-10/sec on a production box with steady traffic.
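
You can watch for this directly: the processes field in /proc/stat is a cumulative fork counter, so the per-second delta between two samples is your fork rate. Below is a minimal sketch of such a check; the 100 forks/sec threshold is an illustrative value, not something from the original post, so tune it to your own baseline.

#!/bin/sh
# Sample the cumulative fork counter in /proc/stat twice and report forks/sec.
# The 100 forks/sec threshold is an assumed example value; tune to your baseline.
THRESHOLD=100
first=$(awk '/^processes/ {print $2}' /proc/stat)
sleep 1
second=$(awk '/^processes/ {print $2}' /proc/stat)
rate=$((second - first))
if [ "$rate" -gt "$THRESHOLD" ]; then
    echo "CRITICAL: fork rate ${rate}/sec exceeds ${THRESHOLD}/sec"
    exit 2
fi
echo "OK: fork rate ${rate}/sec"
exit 0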

2 – Flow Control Packets

TL;DR: If your network configuration honors flow control packets and isn’t configured to disable them, they can temporarily cause dropped traffic. (If that doesn’t sound like an outage to you, you need your head checked.)

$ /usr/sbin/ethtool -S eth0 | grep flow_control
rx_flow_control_xon: 0
rx_flow_control_xoff: 0
tx_flow_control_xon: 0
tx_flow_control_xoff: 0

Note: Read this to understand how these flow control frames can cascade to switch-wide loss of connectivity if you use certain Broadcom NICs. You should also trend these metrics on your switch gear, and while you’re at it, watch your dropped frames.
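
To turn those counters into an alert, a small sketch like the following works: it sums the xon/xoff counters, compares the total against the previous run, and flags any increase. The state-file path and the decision to alert on any increment at all are assumptions for illustration.

#!/bin/sh
# Alert when any flow control counter on eth0 has increased since the last run.
# The state file location is an assumption; adjust for your environment.
STATE=/var/tmp/flow_control.prev
current=$(/usr/sbin/ethtool -S eth0 | grep flow_control | awk '{sum += $2} END {print sum}')
previous=$(cat "$STATE" 2>/dev/null || echo "$current")
echo "$current" > "$STATE"
if [ "$current" -gt "$previous" ]; then
    echo "WARNING: flow control counters increased ($previous -> $current)"
    exit 1
fi
echo "OK: no new flow control frames"
exit 0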

3 – Swap In/Out Rate

It’s common to check for swap usage above a threshold, but even if you have only a small amount of memory swapped, it’s the rate at which memory is swapped in and out that impacts performance, not the quantity. Monitoring the swap in/out rate is a much more direct check for that state.
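
On Linux the cumulative counters live in /proc/vmstat as pswpin and pswpout (pages swapped in and out since boot), so the rate is just the delta between two samples. A minimal sketch, with an assumed example threshold:

#!/bin/sh
# Measure pages swapped in/out per second from /proc/vmstat.
# The 50 pages/sec threshold is an assumed example; tune to your workload.
THRESHOLD=50
before=$(awk '/^pswpin|^pswpout/ {sum += $2} END {print sum}' /proc/vmstat)
sleep 1
after=$(awk '/^pswpin|^pswpout/ {sum += $2} END {print sum}' /proc/vmstat)
rate=$((after - before))
if [ "$rate" -gt "$THRESHOLD" ]; then
    echo "CRITICAL: swap in/out rate ${rate} pages/sec"
    exit 2
fi
echo "OK: swap in/out rate ${rate} pages/sec"
exit 0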

4 – Server Boot Notification

Unexpected reboots are part of life. Do you know when they happen on your hosts? Most people don’t. We use a simple init script that triggers an ops email on system boot. This is valuable to communicate provisioning of new servers, and helps capture state change even if services handle the failure gracefully without alerting.
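
The original post doesn’t include the script itself, but the idea is roughly the following rc.local/init hook, which sends a single email when the box comes up (the recipient address is a placeholder, and it assumes a local mail command is configured):

#!/bin/sh
# /etc/rc.local (or an init script) snippet: email ops when this host boots.
# ops@example.com is a placeholder address.
(
    echo "Host $(hostname -f) booted at $(date)"
    uptime
) | mail -s "boot notification: $(hostname -s)" ops@example.com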

5 – NTP Clock Offset

If not monitored, yes, one of your servers is probably off. If you’ve never thought about clock skew, you might not even be running ntpd on your servers. Generally there are three things to check for: 1) that ntpd is running, 2) clock skew inside your datacenter, and 3) clock skew from your master time servers to an external source.

We use check_ntp_time for this check.
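
Invoked by hand, the standard Nagios plugin looks roughly like this; the plugin path, reference server, and the 0.5s/1s thresholds are example values, not bitly’s actual configuration.

# Make sure the daemon itself is running.
$ pgrep -x ntpd > /dev/null && echo "ntpd running" || echo "ntpd NOT running"

# Warn at 0.5s of offset, go critical at 1s (example thresholds).
$ /usr/lib/nagios/plugins/check_ntp_time -H ntp1.example.com -w 0.5 -c 1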

6 – DNS Resolutions

Internal DNS – It’s a hidden part of your infrastructure that you rely on more than you realize. The things to check for are: 1) local resolution from each server, 2) resolution against (and query volume on) any local DNS servers in your datacenter, and 3) availability of each upstream DNS resolver you use.

External DNS – It’s good to verify that your external domains resolve correctly against each of your published external nameservers. At bitly we also rely on several ccTLDs, and we monitor those authoritative servers directly as well (yes, it has happened that all authoritative nameservers for a TLD were offline).
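
These checks map naturally onto the stock check_dns plugin (or plain dig). A sketch, where the hostnames, resolver IP, and plugin path are placeholders:

# 1) Local resolution from this server using its configured resolver.
$ /usr/lib/nagios/plugins/check_dns -H api.example.com

# 2) Resolution against a specific in-datacenter DNS server (placeholder IP).
$ /usr/lib/nagios/plugins/check_dns -H api.example.com -s 10.0.0.53

# 3) An external domain checked against one of its published authoritative nameservers.
$ dig +short @ns1.example.com bitly.com A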

7 – SSL Expiration

It’s the thing everyone forgets about because it happens so infrequently. The fix is easy: just check it and get alerted with enough lead time to renew your SSL certificates.

define command{
    command_name    check_ssl_expire
    # -C 14 tells check_http to alert when the certificate is within 14 days of expiring
    command_line    $USER1$/check_http --ssl -C 14 -H $ARG1$
}
define service{
    host_name               virtual
    service_description     bitly_com_ssl_expiration
    use                     generic-service
    check_command           check_ssl_expire!bitly.com
    contact_groups          email_only
    normal_check_interval   720
    retry_check_interval    10
    notification_interval   720
}

8 – DELL OpenManage Server Administrator (OMSA)

We run bitly split across two data centers: one is a managed environment with DELL hardware, and the second is Amazon EC2. For our DELL hardware it’s important for us to monitor the outputs from OMSA. This alerts us to RAID status, failed disks (predictive or hard failures), RAM issues, power supply states, and more.
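
OMSA ships an omreport command-line tool you can scrape (there is also the widely used check_openmanage Nagios plugin). The sketch below flags any chassis component OMSA doesn’t report as "Ok"; it assumes omreport’s usual "SEVERITY : COMPONENT" output layout, so treat it as an illustration rather than a drop-in check.

#!/bin/sh
# Flag any chassis component that OMSA does not report as "Ok".
# Assumes omreport's "SEVERITY : COMPONENT" output layout.
bad=$(omreport chassis | grep ' : ' | grep -vE '^(Ok|SEVERITY)')
if [ -n "$bad" ]; then
    echo "CRITICAL: $bad"
    exit 2
fi
echo "OK: all chassis components healthy"
exit 0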

9 – Connection Limits

You probably run things like memcached and mysql with connection limits, but do you monitor how close you are to those limits as you scale out application tiers?

Related to this is addressing the issue of processes running into file descriptor limits. We make a regular practice of running services with ulimit -n 65535 in our run scripts to minimize this. We also set Nginx worker_rlimit_nofile.
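
For MySQL that can be as simple as comparing Threads_connected against max_connections; for the file-descriptor side, a process’s usage and its own limit are both visible under /proc. A minimal sketch (the nginx process name and the 80% warning level are assumptions):

#!/bin/sh
# Warn when a process uses more than 80% of its open-file limit.
# "nginx" and the 80% level are example values.
PROC=nginx
pid=$(pgrep -o -x "$PROC") || exit 3
open=$(ls /proc/$pid/fd | wc -l)
limit=$(awk '/^Max open files/ {print $4}' /proc/$pid/limits)
if [ "$open" -gt $((limit * 80 / 100)) ]; then
    echo "WARNING: $PROC using $open of $limit file descriptors"
    exit 1
fi
echo "OK: $PROC using $open of $limit file descriptors"
exit 0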

10 – Load Balancer Status

We configure our Load Balancers with a health check which we can easily force to fail in order to have any given server removed from rotation. We’ve found it important to have visibility into the health check state, so we monitor and alert based on the same health check. (If you use EC2 Load Balancers you can monitor the ELB state from the Amazon APIs.)
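
Because the monitor hits the same health check the load balancer uses, the check itself can be as small as a curl against that endpoint; the host, port, and /status path below are placeholders for your own health URL.

#!/bin/sh
# Hit the same health-check URL the load balancer polls.
# Host, port, and path are placeholders for your own health endpoint.
URL="http://app01.internal:8080/status"
if curl -sf -m 5 "$URL" > /dev/null; then
    echo "OK: health check passing"
    exit 0
fi
echo "CRITICAL: health check failing for $URL"
exit 2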

Various Other Things to Watch

New entries written to Nginx error logs, service restarts (assuming you have something in place to auto-restart them on failure), NUMA stats, and new process core dumps (great if you run any C code).


We want to thank Jehiah for making his original article available to our readers. Is there something missing from this list, in your opinion? Let us know in the comments!

Codeship – A hosted Continuous Deployment platform for web applications

Read the original blog entry...

More Stories By Manuel Weiss

I am the cofounder of Codeship – a hosted Continuous Integration and Deployment platform for web applications. On the Codeship blog we love to write about Software Testing, Continuous Integration and Deployment. Also check out our weekly screencast series 'Testing Tuesday'!
