How to Automate Enterprise Application Monitoring with Ansible

In my last article, Top DevOps Tools We Love, I announced the availability of deployment automation scripts for our Dynatrace products. Today, you will learn about the basic concepts behind Ansible, which will enable you to roll your own playbook for integrating insightful end-to-end monitoring with Dynatrace into every part of your enterprise application in under 60 seconds.

Enterprise Applications and Distributed Architectures
Enterprise applications are typically segregated into multiple, physically and logically independent functional areas called tiers. As opposed to single-tiered monoliths, the multi-tier pattern enforces a distributed architecture that allows for enhanced maintainability and greater scalability, resulting in increased availability, resilience and performance. Typically, enterprise applications adhere to the classical 3-tier architecture: presentation tier, business logic tier and data tier:

Typical 3-Tier Pattern with an exemplary "flow"

This logical and physical separation makes distributed applications amenable to scale-out (also referred to as horizontal scaling): a tier is itself distributed across multiple physical servers to spread the load that would otherwise have to be handled by a single node. In such a scenario, communication between adjacent tiers is handled by load balancers that relay incoming requests to a particular node in the cluster:

Example of a Load-Balanced Cluster

Distributed Architectures: The Traceability Dilemma
Distributed architectures come with undeniable complexities. One concern that is often undervalued is end-to-end traceability: how do you determine data flows in a multi-tiered environment that spans multiple servers? As I described in an earlier article on the good parts and the not-so-good parts of logging, with traditional logging alone you most probably can't. Good application monitoring solutions such as Dynatrace, on the other hand, let you figure out exactly what your users were doing throughout their visits and assess the performance of your multi-tiered application on whichever nodes are involved, as described in How to Approach Application Failures in Production.

Now, what does it take to obtain these insights? Agent-based application monitoring solutions like Dynatrace, AppDynamics and New Relic require you to load an agent library with your application runtime, such as the Java Runtime Environment (JRE) or the .NET Common Language Runtime (CLR).

An example: monitoring a Java process with Dynatrace requires you to load our native agent via the java command's -agentpath:pathname[=options] option, specifying the name under which the agent registers itself with the Dynatrace Collector together with the Collector's hostname, as shown below. More often than not, the -agentpath string is provided to a Java process via an environment variable such as JAVA_OPTS.

-agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.company.com
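
Put together, a complete invocation might look like this (myapp.jar is merely a placeholder for your own application):

java -agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.company.com -jar myapp.jar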

While this can easily be accomplished for one or two agents, what should companies do that need to deploy hundreds or even thousands of agents? Exactly: automate all that stuff away! Even if all you need is a handful of agents, automation will help you produce predictable outcomes in all your environments. And if the automation tool doesn't get in your way, even better. This is where Ansible comes into play!

Introduction to Ansible
Ansible is a radically simple, yet powerful, IT automation engine for environment and infrastructure provisioning, configuration management, application deployment and much more. Its agentless approach allows you to effectively orchestrate entire environments without installing any prerequisite dependencies on the machines under management: all you need is SSH for Linux hosts or WinRM for Windows hosts as the transport layer. Ansible comes with "batteries included": more than 250 modules bundled with its core allow you to get even complex automation projects done without custom scripting.
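
As a quick first taste, Ansible's ad-hoc mode runs a single module against a group of hosts straight from the command line. Assuming the production inventory shown below, the following command would verify connectivity to all application servers using the ping module:

ansible appservers -i production -m ping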

Concept #1: Inventories
Ansible provisions groups of servers at once. Groups, such as groups of web servers, application servers and database servers, are defined in an inventory. Typically, an inventory is a text file in an INI-like format that might look like this (note the use of numeric and alphabetic ranges):

# file: production
[balancers]
www.example.com

[webservers]
www[0-1].example.com

[appservers]
app[a:c].example.com

[databases]
db.example.com

[monitoring]
dynatrace.example.com
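
Inventories can also attach variables to hosts and groups. As a hypothetical extension of the file above, the following section would make Ansible connect to all application servers as the deploy user (ansible_ssh_user is the connection variable Ansible uses for this purpose at the time of writing):

# file: production (continued)
[appservers:vars]
ansible_ssh_user=deploy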

More information on Ansible inventories can be found in the official documentation.

Concept #2: Playbooks
In Ansible, playbooks define the policies that your machines under management shall enforce. They are the place to lay out your tasks by referencing Ansible modules. The following example playbook installs the Apache Tomcat application server using the apt module with the package name tomcat7 on all hosts that belong to the group appservers, as defined in an inventory. Ansible will connect to each machine via SSH, using the username deploy in this case, and escalate privileges via sudo to install the package. Ansible playbooks are written in YAML.

# file: appservers.yml
---
- hosts: appservers
  remote_user: deploy
  sudo: yes
  tasks:
    - name: Install Apache Tomcat
      apt: name=tomcat7

An Ansible playbook is executed via the ansible-playbook -i <inventory> <playbook.yml> command. Hence, we could execute our playbook like so: ansible-playbook -i production appservers.yml. More information on Ansible playbooks can be found in the official documentation.
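
Before touching any machines, you can also perform a dry run by appending the --check flag: ansible-playbook -i production appservers.yml --check.

To tie both concepts back to our monitoring use case, here is a minimal, hypothetical sketch of how the -agentpath option from above could be rolled out across an entire tier. The archive name, all file locations and the setenv.sh mechanism are assumptions for the sake of illustration; the deployment automation scripts mentioned at the beginning of this article do considerably more:

# file: dynatrace-agent.yml (hypothetical sketch)
---
- hosts: appservers
  remote_user: deploy
  sudo: yes
  tasks:
    # Copy and unpack a locally available agent archive (name and path are assumptions)
    - name: Install the Dynatrace agent
      unarchive: src=dynatrace-agent.tar.gz dest=/opt/dynatrace
    # Append the -agentpath string to Tomcat's JAVA_OPTS
    - name: Inject the agent into the JVM via setenv.sh
      lineinfile: dest=/usr/share/tomcat7/bin/setenv.sh create=yes line='export JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.example.com"'
    # The agent is only picked up when the JVM starts
    - name: Restart Tomcat
      service: name=tomcat7 state=restarted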

For concept #3, see the full article.

More Stories By Martin Etmajer

Leveraging his outstanding technical skills as a lead software engineer, Martin Etmajer has been a key contributor to a number of large-scale systems across a range of industries. He is as passionate about great software as he is about applying Lean Startup principles to the development of products that customers love.

Martin is a life-long learner who frequently speaks at international conferences and meet-ups. When not spending time with his family, he enjoys swimming and yoga. He holds a master's degree in Computer Engineering from the Vienna University of Technology, Austria, with a focus on dependable distributed real-time systems.


