Terminology Nerd War By @HoardingInfo | @DevOpsSummit [#DevOps]

It turns out your background is important to your interpretation of the DevOps lingo

Terminology Nerd War: APM, Log Analysis & More
by Chris Riley

Just the other day I was hanging out with my developer buddy. We got into what we thought would be an interesting discussion about how you cannot call an environment "DevOps" without analytics.

But we were soon in a nerd war over what a term meant. Yes, this is what I talk about in my free time.

In the thick of it, we both used the term "Server Monitoring." But neither of us was talking about the same thing. I was referring to log management and analysis, and he was referring to application performance monitoring (APM). No wonder the DevOps market is confused. But the good news is that once we realized our mistake, we agreed that both APM and log analysis are critical and beneficial to the DevOps practice.

It turns out your background is important to your interpretation of the DevOps lingo. There are basically four points of view: front-end developers, back-end developers, QA, and IT. They all speak the same language, but with different dialects. And the developer dialect is the furthest from IT's. This is where the differences between APM and log analysis are confusing, but at the same time made clearer.

Log Analysis vs. APM?

Developers
Developers are all about the software layer, and thus when they think analytics, they think mostly of analytics for the application. That means the "server" to them is the web server, not the VM the stack is running on. When you talk about analysis, even if a log analysis platform and not APM is being used, what matters most to them is getting data about the application's operation, users, and functionality.

IT Operations
To IT, the "server" is, at a minimum, the hypervisor, but could even be bare metal. IT keeps the VMs up and running, and their perception of the application is from the server up. They tend to think of server monitoring as OS events and system-level performance. They want to know about processes, bandwidth, pegged disks, etc. As they work up the stack to the application layer, their interest is mostly focused on how activity and functionality will impact servers and uptime.
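The system-level signals IT cares about can be sampled in a few lines. Here is a minimal sketch using only Python's standard library; the function name, thresholds, and field names are illustrative, not from any particular monitoring product:

```python
import os
import shutil

def sample_host_metrics(path="/"):
    """Collect a few OS-level data points an ops team typically watches."""
    total, used, free = shutil.disk_usage(path)   # bytes
    load1, load5, load15 = os.getloadavg()        # 1/5/15-minute load averages (Unix only)
    return {
        "disk_used_pct": round(100 * used / total, 1),
        "load_1m": load1,
        "load_5m": load5,
    }

metrics = sample_host_metrics()
# Flag a nearly pegged disk before it takes the VM down
if metrics["disk_used_pct"] > 90:
    print(f"WARN disk at {metrics['disk_used_pct']}% on /")
```

In practice a tool would sample these on an interval and ship them off-host, but the raw inputs are this simple.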

Both are right and wrong at the same time. But mostly both are wrong because they do not take the time to understand each other, which leads to some interesting conversations that go nowhere. The result is a common vice committed across entire teams: choosing one tool for the job. This actually is feasible from IT's perspective, because log analysis has the unique ability to monitor and analyze across all layers of the application and all the processes that support it.

APM
As we already alluded to, APM is the application layer only. In its simplest terms it breaks down to measuring the time for each HTTP(S) request or post, and who made the request or post. But it goes further, to an abstracted, higher-level view of how application functionality either degrades or improves performance, and of all user and application data over long periods of time. What is even more confusing is when you add in the term "load testing" (which is not APM either), because load testing is focused on the pre-release stages of development. It is executed by simulating connections to the application, which APM does not do but can monitor. Generally APM has not expanded to look at application data in the earlier stages of the pipeline, such as QA and continuous integration or delivery (deployment is a different story).
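In code, that core APM measurement amounts to wrapping every request in a timer and recording who made it. A minimal sketch as WSGI middleware follows; the class and record format are illustrative, not any vendor's agent:

```python
import time

class RequestTimer:
    """WSGI middleware recording method, path, caller, and latency per request."""
    def __init__(self, app):
        self.app = app
        self.records = []  # a real agent would ship these to a backend

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        response = self.app(environ, start_response)
        # Note: a production agent would also time body iteration and errors
        self.records.append({
            "method": environ.get("REQUEST_METHOD", "GET"),
            "path": environ.get("PATH_INFO", "/"),
            "user": environ.get("REMOTE_USER", "anonymous"),  # who made the request
            "ms": (time.perf_counter() - start) * 1000,
        })
        return response

# Tiny demo app wrapped by the middleware
def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

timed_app = RequestTimer(app)
timed_app({"REQUEST_METHOD": "POST", "PATH_INFO": "/checkout"}, lambda s, h: None)
print(timed_app.records[0]["path"])  # prints /checkout
```

Everything APM layers on top of this (aggregation, percentiles, user analytics over time) starts from per-request records like these.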

Log Analysis
The great thing about log analysis is that it is like a warm blanket for the entire DevOps process. You can log anything and everything: bare metal (used in software-defined data centers), hypervisor, virtual machine (most common), and application. And not just your application, but all the component applications around the entire process as well. This includes release management, the IDE, and other services contributing to the creation of the application.
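Because log lines are just text with structure, every layer can emit into the same stream and be searched the same way. A minimal sketch with Python's standard logging module; the "layer" tag and field names are illustrative conventions, not any platform's schema:

```python
import json
import logging

# One shared stream; each layer tags its events with a "layer" field
logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("devops")

def emit(layer, event, **fields):
    """Emit one JSON log line so events from any layer are queryable alike."""
    line = json.dumps({"layer": layer, "event": event, **fields})
    log.info(line)
    return line

emit("hypervisor", "vm_started", vm="web-01")
emit("vm", "disk_usage", pct=87.5)
emit("application", "login_failed", user="alice")
emit("release", "deploy_finished", build=142, status="ok")
```

With a consistent shape like this, a single query language covers the hypervisor, the VM, the application, and the release tooling at once.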

The other nice part about log analysis is that the more data you log, the more consistent your language is for talking about the environment.

The problem with getting the two to work together in sync is not just the confusion around them, it is also in sharing. Neither IT nor developers like to share. So while IT might set up APM, the developers hoard it, and vice versa IT might hoard log analysis. This commonly results in both teams getting the tools, but just so they can have their own castle.

Some organizations are more progressive. IT might deliver data from, or provide access to, the log analysis platform. Developers then find they can pretty much get, and share, all the data they need from the application layer, as well as its relationship to server data.

My vote is to have both. But I also have the perspective that you should do what you can to get them in the same place. Otherwise having two platforms means that the language barrier is carried forward and communication still doesn't improve. DevOps is about breaking down walls, not building them up.

Integrations like Logentries and New Relic are magic. No matter where you are, data is consistent and shared, and language is unified.

And once it is set up, there really is no question of where information lives, how it is communicated, or who owns what.

The next time you start throwing around DevOps terminology, make sure you are talking about the same thing. And when it comes to server monitoring, allow log analysis to be the system of record for all data, and allow APM to do what it does best: understanding your users.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
