Application-Aware Network Performance Monitoring | @DevOpsSummit #APM #DevOps

Comparing Network Topology Mapping Tools and Application-Aware Network Performance Monitoring

Application-Aware Network Performance Monitoring for Best User Experience

Network performance monitoring tools that collect traffic flows (e.g., NetFlow, IPFIX, sFlow) provide much greater insight into what is happening in your network. One of the primary reasons for deploying network performance monitoring tools is to gain insight into the quality of the end user's experience and how applications are performing. This key capability is lacking in a network topology mapping tool. Simply put, network topology mapping is not a monitoring tool.

Ulrica de Fort-Menares, vice president of product strategy at LiveAction, raised the subject of dynamic network topology mapping tools, noting that her customers often ask how they compare with network performance monitoring tools. The topic will be discussed at the LiveAction Annual User Conference, taking place in September in San Francisco.

Let's explain first what a dynamic network topology mapping tool is, as described by the LiveAction executive:

A dynamic network topology map provides an interactive, animated visualization of the connections between network elements and end systems. Many network management solutions use discovery capabilities to find what elements you have in the network. Some go one step further by discovering how network elements are connected and putting them together to give you a dynamic network topology map. Using a combination of protocols such as Cisco Discovery Protocol (CDP) and Link Layer Discovery Protocol (LLDP), along with SNMP data and information collected through the Command Line Interface (CLI), the map can display network information to drive troubleshooting diagnoses both in real time and historically. Examples of useful diagnostic information include interface errors, router-down and link-down events. Building a model based on this information, you can map a traffic path between point A and point B. The ability to perform path analysis makes troubleshooting more intuitive. For the purposes of this discussion, we can call this type of network management tool a network topology mapping tool. Network topology mapping tools are particularly good for network documentation, and they ease network troubleshooting if you suspect the problem is caused by a topology or configuration change.
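To make the model-building idea concrete, here is a minimal sketch (not LiveAction's implementation) that builds a topology graph from CDP/LLDP-style neighbor adjacencies and maps a path between point A and point B. The device names and links are hypothetical, and the adjacency data would in practice come from SNMP or CLI collection rather than being hard-coded.

```python
# Sketch: build a network model from neighbor adjacencies and run path analysis.
import networkx as nx

# (local device, local interface, remote device, remote interface)
# Hypothetical adjacencies; in practice gathered via CDP/LLDP over SNMP or CLI.
neighbor_adjacencies = [
    ("edge-sw-1", "Gi0/1", "dist-rtr-1", "Gi0/0/1"),
    ("dist-rtr-1", "Gi0/0/2", "core-rtr-1", "Te0/1"),
    ("core-rtr-1", "Te0/2", "dist-rtr-2", "Gi0/0/1"),
    ("dist-rtr-2", "Gi0/0/2", "edge-sw-2", "Gi0/1"),
]

topology = nx.Graph()
for local_dev, local_if, remote_dev, remote_if in neighbor_adjacencies:
    # Keep the interface pair on the edge so diagnostics (interface errors,
    # link-down events) can later be attached to the right link.
    topology.add_edge(local_dev, remote_dev, interfaces=(local_if, remote_if))

# "Map a traffic path between point A and point B" using the model.
path = nx.shortest_path(topology, "edge-sw-1", "edge-sw-2")
print(" -> ".join(path))
```

The value and the limitation of this approach are the same: the path comes from the model, so it is only as accurate as the CLI and SNMP data the model was built from.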

At a glance, network topology mapping tools appear to overlap with network performance monitoring tools. Both discover the network, present a network topology map, collect SNMP and CLI data from network elements, and perform path analysis, and both are used by network engineers for troubleshooting.

According to De Fort-Menares, what users really want is to compare network topology mapping tools and application-aware network performance monitoring tools before making a decision; she often gets this question from customers.

Gartner offers a good definition of network performance monitoring tools.

Here is the comparison, with De Fort-Menares' comments:

 


Network Topology Mapping Tools vs. Application-Aware Network Performance Monitoring Tools

Primary Data Source
- Network topology mapping tools: CLI, SNMP, CDP/LLDP
- Application-aware network performance monitoring tools: NetFlow, SNMP, packet capture and CLI

Data Collection Approach
- Network topology mapping tools: pull, on demand
- Application-aware network performance monitoring tools: push, always-on monitoring

Troubleshooting Approach
- Network topology mapping tools: build a model of how the network is constructed; compare configuration files and the output of show commands to identify changes that may have caused the problem.
- Application-aware network performance monitoring tools: report on observations from the network and reflect what is actually happening in it.

Primary Purpose of the Topology Diagram
- Network topology mapping tools: automate network documentation; automatically detect changes in the network and keep the topology diagram up to date.
- Application-aware network performance monitoring tools: overlay real application traffic on top of the topology diagram (see the sketch after this comparison).

Path Analysis
- Network topology mapping tools: typically interrogate the path between a pair of IP addresses using the model built from CLI and SNMP information.
- Application-aware network performance monitoring tools: visualize all the traffic flows over multiple paths.
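As a rough illustration of the "overlay real application traffic on top of the topology diagram" point, the sketch below aggregates flow-derived byte counts per link so a map could size or color each edge by observed load. The link labels, application names, and byte counts are invented for illustration; a real product would derive them from flow exports correlated with the topology.

```python
# Sketch: aggregate observed traffic per link so it can be overlaid on a topology map.
from collections import defaultdict

# (link, application, bytes) tuples, as a flow collector might produce
# after correlating flow records with the links they traversed.
flow_observations = [
    (("dist-rtr-1", "core-rtr-1"), "https", 1_250_000),
    (("dist-rtr-1", "core-rtr-1"), "voip", 80_000),
    (("core-rtr-1", "dist-rtr-2"), "https", 1_250_000),
]

bytes_per_link = defaultdict(int)
bytes_per_link_app = defaultdict(int)
for link, app, nbytes in flow_observations:
    bytes_per_link[link] += nbytes              # total load per link
    bytes_per_link_app[(link, app)] += nbytes   # per-application breakdown

for link, total in bytes_per_link.items():
    print(f"{link[0]} <-> {link[1]}: {total} bytes observed")
```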

The LiveAction executive states that "there is a perception that router-based traffic-flow collection and analysis is impractical to turn on at every interface and device in the network, leading to blind spots. In reality, it is not necessary to turn on flow collection everywhere, although the more observation points you have, the better the visibility. It is also increasingly not possible to enable flow collection and analysis at every node due to administrative control issues with managed services and the Internet. A model-based network topology mapping tool is going to have a hard time dealing with this kind of black hole of information, with no CLI or SNMP access to the network elements, whereas a traffic measurement centric view is able to stitch together a picture from the disparate parts."
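To show what such a traffic-measurement-centric view is built from, here is a minimal sketch of a NetFlow v5 collector that listens for flow exports from routers and prints one line per flow record. The listening port and the choice to print rather than store records are assumptions for illustration; a production collector would also need to handle NetFlow v9/IPFIX templates and sFlow.

```python
# Sketch: receive NetFlow v5 export packets and decode the flow records.
import socket
import struct

HEADER_FMT = "!HHIIIIBBH"              # NetFlow v5 header, 24 bytes
RECORD_FMT = "!IIIHHIIIIHHBBBBHHBBH"   # NetFlow v5 flow record, 48 bytes
HEADER_LEN = struct.calcsize(HEADER_FMT)
RECORD_LEN = struct.calcsize(RECORD_FMT)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 2055))  # commonly used (but not mandatory) NetFlow port

while True:
    packet, exporter = sock.recvfrom(65535)
    version, count, *_ = struct.unpack(HEADER_FMT, packet[:HEADER_LEN])
    if version != 5:
        continue  # only NetFlow v5 is handled in this sketch
    for i in range(count):
        offset = HEADER_LEN + i * RECORD_LEN
        rec = struct.unpack(RECORD_FMT, packet[offset:offset + RECORD_LEN])
        src = socket.inet_ntoa(struct.pack("!I", rec[0]))
        dst = socket.inet_ntoa(struct.pack("!I", rec[1]))
        octets, srcport, dstport, proto = rec[6], rec[9], rec[10], rec[13]
        print(f"{exporter[0]}: {src}:{srcport} -> {dst}:{dstport} "
              f"proto={proto} bytes={octets}")
```

Because each exporter pushes its own observations, records from many vantage points can be stitched together even where there is no CLI or SNMP access to the devices in between.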

After many years in the networking industry, with hands-on experience and various patents, she concludes that "network performance monitoring tools that collect traffic flows (e.g., NetFlow, IPFIX, sFlow) provide much greater insight into what is happening in your network. One of the primary reasons for deploying network performance monitoring tools is to gain insight into the quality of the end user's experience and how applications are performing. This key capability is lacking in a network topology mapping tool. Quite simply put, network topology mapping is not a monitoring tool!" To register for the LiveAction User Conference (dinner and a San Francisco Giants ticket included), De Fort-Menares invites you to go to http://liveaction.com/livex/.

More Stories By Georgiana Comsa

Georgiana Comsa is the founder of Silicon Valley PR, a PR agency with a unique focus on the data infrastructure markets. Georgiana's decision to found Silicon Valley PR was based on her own experience as a corporate PR professional working with other PR agencies: she noticed a need for a specialized, rather than general, tech PR firm with media, analyst, and vendor relationships that would benefit its clients. With Silicon Valley PR, companies leverage the power of traditional and digital media relations to generate highly targeted press coverage and tangible business wins that help them launch and grow their businesses.
