How to Automate Enterprise Application Monitoring with Ansible

In my last article, Top DevOps Tools We Love, I announced the availability of deployment automation scripts for our Dynatrace products. Today, you will learn about the basic concepts behind Ansible, which will enable you to roll your own playbook for integrating insightful end-to-end monitoring with Dynatrace into every part of your enterprise application in under 60 seconds.

Enterprise Applications and Distributed Architectures
Enterprise applications are typically segregated into multiple, physically and logically independent functional areas called tiers. As opposed to single-tiered monoliths, the multi-tier pattern enforces a distributed architecture that allows for enhanced maintainability and greater scalability, resulting in increased availability, resilience and performance. Typically, enterprise applications adhere to the classical 3-tier architecture: presentation tier, business logic tier and data tier:

Typical 3-Tier Pattern with an exemplary "flow"

Their logical and physical separateness makes distributed applications amenable to scale-out (also referred to as horizontal scaling). This means that a tier itself gets distributed across multiple physical servers to effectively spread the load that would otherwise have to be handled by a single node. In such a scenario, communication between adjacent tiers is handled by load balancers, which relay incoming requests to a particular node in the cluster:

Example of a Load-Balanced Cluster

Distributed Architectures: The Traceability Dilemma
Distributed architectures have undeniable complexities. One matter that is often undervalued is end-to-end traceability: how do you trace data flows in a multi-tiered environment that spans multiple servers? As I have described in an earlier article on the good parts and the not-so-good parts of logging, with traditional logging alone, you most probably can't. Good application monitoring solutions, such as Dynatrace, on the other hand, allow you to figure out exactly what your users were doing throughout their visits and to assess the performance of your multi-tiered application across all nodes involved, as described in How to Approach Application Failures in Production.

Now, what does it take to obtain these insights? Agent-based application monitoring solutions like Dynatrace, AppDynamics and New Relic require you to load an agent library with your application runtime, such as the Java Runtime Environment (JRE) or the .NET Common Language Runtime (CLR).

An example: monitoring a Java process with Dynatrace requires you to load our native agent through the java command's -agentpath:pathname[=options] option and to additionally specify the name under which the agent shall register itself with the Dynatrace Collector, together with the Collector's hostname, as shown below. More often than not, the -agentpath string is provided to a Java process via some environment variable, such as JAVA_OPTS.

-agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.company.com
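For instance, a minimal sketch of wiring the agent into a Tomcat environment via JAVA_OPTS might look like this (the defaults file location, /etc/default/tomcat7, is an assumption; the exact file depends on your distribution and setup):

# e.g., in /etc/default/tomcat7 - the exact file varies by environment
export JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.company.com"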

While this can easily be accomplished for one or two agents, what should companies do that need to deploy hundreds or even thousands of agents? Exactly: automate all that stuff away! Even if all you need is a handful of agents, automation will help you produce predictable outcomes in all your environments. And if the automation tool doesn't get in your way, even better. This is where Ansible comes into play!

Introduction to Ansible
Ansible is a radically simple, yet powerful, IT automation engine for environment and infrastructure provisioning, configuration management, application deployment and much more. Its agentless approach allows you to effectively orchestrate entire environments without having to install any prerequisite dependencies on the machines under management - all you need is SSH for Linux hosts or WinRM for Windows hosts as the transport layer. Ansible comes with "batteries included": more than 250 modules bundled with Ansible's core allow you to get even complex automation projects done without any custom scripting.

Concept #1: Inventories
Ansible provisions groups of servers at once. Groups, such as groups of web servers, application servers and database servers, are defined in an inventory. Typically, an inventory is a text file expressed in an INI-like format that could look like this (note the use of numeric and alphabetic ranges):

# file: production
[balancers]
www.example.com

[webservers]
www[0-1].example.com

[appservers]
app[a:c].example.com

[databases]
db.example.com

[monitoring]
dynatrace.example.com

More information on Ansible inventories can be found in the official documentation.
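Once defined, inventory groups can be exercised directly with ad-hoc commands. A minimal sketch, assuming the inventory file above and SSH access as the deploy user (both illustrative):

# Check connectivity to all application servers in the "production" inventory
ansible appservers -i production -m ping -u deploy

# Run an arbitrary command across all web servers
ansible webservers -i production -a "uptime" -u deploy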

Concept #2: Playbooks
In Ansible, playbooks define the policies your machines under management shall enforce. They are the place for you to lay out your tasks by referencing Ansible modules. The following example playbook installs the Apache Tomcat application server using the apt module with the package name tomcat7 on all hosts that belong to the group appservers, as defined in an inventory. Ansible will try to connect to each machine via SSH, using the username deploy in this case. Ansible playbooks are written in YAML.

# file: appservers.yml
---
- hosts: appservers
  remote_user: deploy
  sudo: yes
  tasks:
    - name: Install Apache Tomcat
      apt: name=tomcat7

An Ansible playbook is executed via the ansible-playbook -i <inventory> <playbook.yml> command. Hence, we could execute our playbook like so: ansible-playbook -i production appservers.yml. More information on Ansible playbooks can be found in the official documentation.
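Putting both concepts together, a playbook can also roll out the monitoring agent discussed earlier. The following is a minimal sketch, not our official deployment scripts: it assumes the agent libraries are already unpacked to /opt/dynatrace and that Tomcat reads its environment from /etc/default/tomcat7 - adapt both to your environment.

# file: dynatrace-agent.yml (an illustrative sketch)
---
- hosts: appservers
  remote_user: deploy
  sudo: yes
  tasks:
    # Agent path, agent name and collector hostname are assumptions - see above.
    - name: Load the Dynatrace Agent into Tomcat via JAVA_OPTS
      lineinfile: dest=/etc/default/tomcat7 line='JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.example.com"'
      notify: Restart Apache Tomcat
  handlers:
    - name: Restart Apache Tomcat
      service: name=tomcat7 state=restarted

Running ansible-playbook -i production dynatrace-agent.yml would then wire the agent into every application server in the cluster in one go.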

For Concept #3, see the full article.

More Stories By Martin Etmajer

Leveraging his outstanding technical skills as a lead software engineer, Martin Etmajer has been a key contributor to a number of large-scale systems across a range of industries. He is as passionate about great software as he is about applying Lean Startup principles to the development of products that customers love.

Martin is a life-long learner who frequently speaks at international conferences and meet-ups. When not spending time with family, he enjoys swimming and yoga. He holds a master's degree in Computer Engineering from the Vienna University of Technology, Austria, with a focus on dependable distributed real-time systems.
