
Scaling Incident Management
By Patrick O'Fallon

Incident management is paramount to the success of any modern ITOps team. However, much like growing a business, scaling incident management can trigger growing pains. As the landscape of devices, applications, and systems grows - each requiring monitoring - so too does the alert noise and the complexity of managing on-call staff. With an increasing number of engineers on your team, it can be difficult to onboard new hires and implement notification policies and after-hours operations that keep your team efficient and the load fairly distributed. The push toward hybrid and bimodal IT environments can further complicate incident management. Nevertheless, with a few tried-and-true techniques, you can scale incident management in a planned, deliberate, organized, and effective way.

Don't fall victim to your changing ITOps environment
Let's first understand the problem with an example where scaling becomes a serious issue.
You've finally dialed in your incident management process, only to learn shortly afterward that your company has acquired a new business. Now your Ops team is taking over IT for the new environment, in addition to what you're already responsible for. At first glance, you imagine the perfect scenario in which you simply apply the same tools and methodology to this entirely new stack.

However, reality is rarely perfect - the new company may use a different tech stack and different incident management and monitoring tools and methodologies. While this scenario is daunting, it is similar to any growth scenario - whether growing your IT team or adopting more agile and bimodal ITOps structures. Whichever scaling scenario you face, below are some ideas for any organization working to scale its monitoring, incident management, and team.

Identify the main areas of scale
Are you implementing new hardware, software, or services? Are there new complexities within your future-state ITOps environment? Has your engineering team just grown? Have you inherited an application whose code errors need to be reported? In every case, you must identify the areas in which your ITOps team is being forced to scale its operations.

Monitoring Tools
Ensuring coverage of your monitoring tools across your entire stack is paramount to the success of scaling. To adapt to this change, don't be afraid to implement multiple or entirely new monitoring systems outside of your current stack. The goal of these systems is to gain full-stack visibility, and in many cases this requires implementing different monitoring tools in order to appropriately monitor disparate and new systems. But to truly support organized scale, there needs to be a way to normalize, de-dupe, correlate, and gain actionable insights from all this data. All the events generated by these monitoring tools must be centralized in a single hub, from which they can be triaged and routed to the right on-call engineer.
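To make the normalize-and-de-dupe idea concrete, here is a minimal sketch. It assumes two hypothetical event payloads (a Nagios-style one and a generic webhook-style one - the field names are illustrative, not any tool's real schema), maps both onto one common format, and collapses duplicates by a (service, check) fingerprint:

```python
from dataclasses import dataclass

@dataclass
class Event:
    source: str     # which monitoring tool emitted the event
    service: str    # affected service or host
    check: str      # the failing check or metric
    severity: str   # normalized severity: "critical", "warning", "info"

def normalize(raw: dict) -> Event:
    """Map tool-specific payloads onto the common Event schema (illustrative)."""
    if raw["tool"] == "nagios":
        sev = {"CRITICAL": "critical", "WARNING": "warning"}.get(raw["state"], "info")
        return Event("nagios", raw["host"], raw["service_desc"], sev)
    # Fallback for a generic webhook-style payload
    return Event(raw["tool"], raw["service"], raw["check"], raw.get("severity", "info"))

def dedupe(events: list[Event]) -> list[Event]:
    """Collapse events that share the same (service, check) fingerprint."""
    seen: dict[tuple, Event] = {}
    for e in events:
        seen.setdefault((e.service, e.check), e)
    return list(seen.values())

raw_events = [
    {"tool": "nagios", "host": "web-1", "service_desc": "disk", "state": "CRITICAL"},
    {"tool": "nagios", "host": "web-1", "service_desc": "disk", "state": "CRITICAL"},
    {"tool": "datadog", "service": "web-1", "check": "disk", "severity": "critical"},
]
unique = dedupe([normalize(r) for r in raw_events])
print(len(unique))  # three raw events about the same disk collapse to 1
```

Real incident management platforms do far more (time windows, correlation rules, enrichment), but the core move - one schema, one fingerprint, one hub - is the same.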

Noise Reduction
Once monitoring is in place, the goal is to understand the data for effective incident resolution. Adjusting the routing behavior across your monitoring tools and configuring appropriate thresholds is a great next step to ensure your team does not experience alert fatigue once new tools are implemented. Aggregating this data in a common incident management system, and suppressing or filtering out non-actionable alerts so they never page anyone, is critical to reducing noise and enriching the visibility of incidents across your entire stack.
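One simple way to sketch suppression plus thresholding: only page when an alert is both severe enough and has fired repeatedly within the evaluation window. The thresholds and alert keys below are hypothetical defaults, not recommendations:

```python
from collections import defaultdict

SEVERITY_RANK = {"info": 0, "warning": 1, "critical": 2}
PAGE_SEVERITY = "critical"   # minimum severity allowed to page on-call
REPEAT_THRESHOLD = 3         # alert must fire this many times before paging

fire_counts: dict[str, int] = defaultdict(int)

def should_page(alert_key: str, severity: str) -> bool:
    """Return True only for alerts worth waking an engineer for."""
    if SEVERITY_RANK[severity] < SEVERITY_RANK[PAGE_SEVERITY]:
        return False                      # suppress warnings and info entirely
    fire_counts[alert_key] += 1
    return fire_counts[alert_key] >= REPEAT_THRESHOLD

# A flapping warning never pages; a critical alert pages on its 3rd firing.
print(should_page("web-1:load", "warning"))   # False
print(should_page("web-1:disk", "critical"))  # False (1st firing)
print(should_page("web-1:disk", "critical"))  # False (2nd)
print(should_page("web-1:disk", "critical"))  # True  (3rd)
```

In practice you would also decay the counts over time so a slow trickle of alerts doesn't eventually page, but the filter-before-page structure is the point.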

Incident Management
A comprehensive incident management platform will help integrate data from all your tools and grow with you as you scale. It not only unifies all your disparate monitoring alerts into one common system, but also supports growth in your engineering team without generating confusion around resource management. Moreover, it facilitates more accountability as well as more organized collaboration. As a bonus, you can use incident analytics to show your boss how well your ITOps team is managing and resolving outages.
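The "route to the right on-call engineer without confusion" idea can be sketched as a tiny router. The team names, schedules, and service-to-team rule here are all made up for illustration; a real platform would pull the on-call rotation from a schedule, not a hard-coded list:

```python
# Primary responder first, then the escalation chain, per team.
SCHEDULES = {
    "payments": ["alice", "bob"],
    "platform": ["carol", "dave"],
}

def route(service: str, escalation_level: int = 0) -> str:
    """Pick the engineer for a service at the given escalation level."""
    # Toy ownership rule: payment services are prefixed "pay-".
    team = "payments" if service.startswith("pay-") else "platform"
    chain = SCHEDULES[team]
    # Clamp to the last responder if we escalate past the end of the chain.
    return chain[min(escalation_level, len(chain) - 1)]

print(route("pay-api"))                       # alice (primary on payments)
print(route("pay-api", escalation_level=1))   # bob (escalated)
print(route("web-frontend"))                  # carol (primary on platform)
```

The value of centralizing this logic is that every monitoring tool feeds the same router, so adding engineers or teams means updating one schedule rather than reconfiguring each tool.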

Scale and complexity are not going away
The world of ITOps is evolving rapidly, but one thing is clear - IT teams are being ordered to scale their operations in almost every capacity. Legacy ITOps environments are transitioning to and adopting more hybrid and agile architectures and frameworks. Users are continually demanding faster and more reliable access to data across different devices. As a result, it's necessary for ITOps teams to be equipped with a plan for scaling. Incident management is now a necessity as the stakes of downtime get higher.

The post Scaling Incident Management appeared first on PagerDuty.

More Stories By PagerDuty Blog

PagerDuty’s operations performance platform helps companies increase reliability. By connecting people, systems, and data in a single view, PagerDuty delivers visibility and actionable intelligence across global operations for effective incident resolution. PagerDuty has over 100 platform partners and is trusted by Fortune 500 companies and startups alike, including Microsoft, National Instruments, Electronic Arts, Adobe, Rackspace, Etsy, Square, and GitHub.
