Automate Enterprise App Monitoring By @MEtmajer | @DevOpsSummit [#DevOps]

How to Automate Enterprise Application Monitoring with Ansible

In my last article, Top DevOps Tools We Love, I announced the availability of deployment automation scripts for our Dynatrace products. Today, you will learn the basic concepts behind Ansible, which will enable you to roll your own playbook for integrating insightful end-to-end monitoring with Dynatrace into every part of your enterprise application in under 60 seconds.

Enterprise Applications and Distributed Architectures
Enterprise applications are typically segregated into multiple physically and logically independent functional areas called tiers. As opposed to single-tiered monoliths, the multi-tier pattern enforces a distributed architecture that allows for enhanced maintainability and greater scalability, resulting in increased availability, resilience and performance. Typically, enterprise applications adhere to the classical 3-tier architecture: presentation tier, business logic tier and data tier:

Typical 3-Tier Pattern with an exemplary "flow"

Their logical and physical separateness makes distributed applications amenable to scale-out (also referred to as horizontal scaling). This means that a single tier is distributed across multiple physical servers, effectively spreading load that would otherwise have to be handled by a single node. In such a scenario, communication between adjacent tiers is handled by load balancers that relay incoming requests to a particular node in the cluster:

Example of a Load-Balanced Cluster

Distributed Architectures: The Traceability Dilemma
Distributed architectures have undeniable complexities. One matter that is often undervalued is end-to-end traceability: how do you determine data flows in a multi-tiered environment that spans multiple servers? As I have described in an earlier article on the good parts and the not-so-good parts of logging, using only traditional logging, you most probably can't. Good application monitoring solutions, such as Dynatrace, on the other hand, allow you to figure out exactly what your users were doing throughout their visits and to assess the performance of your multi-tiered application on whichever nodes are involved, as described in How to Approach Application Failures in Production.

Now, what does it take to obtain these insights? Agent-based application monitoring solutions like Dynatrace, AppDynamics and New Relic require you to load an agent library into your application runtime, such as the Java Runtime Environment (JRE) or the .NET Common Language Runtime (.NET CLR).

An example: monitoring a Java process with Dynatrace requires you to load our native agent through the java command's -agentpath:pathname[=options] option, specifying the name under which the agent shall register itself with the Dynatrace Collector along with the Collector's hostname, as shown below. More often than not, the -agentpath string is provided to a Java process via some environment variable, such as JAVA_OPTS.

-agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.company.com
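In a start script, this typically amounts to appending the option to JAVA_OPTS before launching the JVM. The sketch below uses the paths and hostnames from the example above, which are illustrative, not prescriptive:

```shell
# Sketch: append the Dynatrace -agentpath option to JAVA_OPTS.
# Agent path and collector hostname mirror the example above and are illustrative.
AGENT_LIB=/opt/dynatrace/agent/lib/libdtagent.so
AGENT_OPTS="name=java-agent,collector=dynatrace.company.com"
export JAVA_OPTS="${JAVA_OPTS:-} -agentpath:${AGENT_LIB}=${AGENT_OPTS}"
echo "$JAVA_OPTS"
# A Tomcat or standalone JVM start script would then pick this up, e.g.:
# java $JAVA_OPTS -jar application.jar
```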

While this can easily be accomplished for one or two agents, what about companies that need to deploy hundreds or thousands of agents? Exactly: automate all that stuff away! Even if all you need is a handful of agents, automation will help you produce predictable outcomes in all your environments. And if the automation tool doesn't get in your way, even better. This is where Ansible comes into play!

Introduction to Ansible
Ansible is a radically simple, yet powerful, IT automation engine for environment and infrastructure provisioning, configuration management, application deployment and much more. Its agentless approach allows you to effectively orchestrate entire environments without installing any prerequisite dependencies on the machines under management: all you need is SSH for Linux hosts or WinRM for Windows hosts as the transport layer. Ansible comes with "batteries included": more than 250 modules bundled with its core let you get even complex automation projects done without custom scripting.

Concept #1: Inventories
Ansible provisions groups of servers at once. Groups, such as groups of web servers, application servers and database servers, are defined in an inventory. Typically, an inventory is a text file in an INI-like format and could look like this (note the use of numeric and alphabetic ranges):

# file: production
[balancers]
www.example.com

[webservers]
www[0-1].example.com

[appservers]
app[a:c].example.com

[databases]
db.example.com

[monitoring]
dynatrace.example.com

More information on Ansible inventories can be found in the official documentation.
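Ansible expands the numeric and alphabetic ranges in the inventory above to one host per value. The effect is analogous to shell brace expansion, which produces the same host lists for these examples (the hostnames are the illustrative ones from the inventory):

```shell
# Analogy only: Ansible expands www[0-1] and app[a:c] itself; bash brace
# expansion yields the same host lists for these particular patterns.
webservers=$(echo www{0..1}.example.com)
appservers=$(echo app{a..c}.example.com)
echo "$webservers"
echo "$appservers"
```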

Concept #2: Playbooks
In Ansible, playbooks define the policies your machines under management shall enforce. They are the place for you to lay out your tasks by referencing Ansible modules, and they are written in YAML. The following example playbook installs the Apache Tomcat application server, using the apt module with the package name tomcat7, on all hosts that belong to the group appservers, as defined in an inventory. Ansible will try to connect to each machine via SSH, in this case as the user deploy.

# file: appservers.yml
---
- hosts: appservers
  tasks:
    - name: Install Apache Tomcat
      apt: name=tomcat7
  remote_user: deploy
  sudo: yes

An Ansible playbook is executed via the ansible-playbook -i <inventory> <playbook.yml> command. Hence, we could execute our playbook like so: ansible-playbook -i production appservers.yml. More information on Ansible playbooks can be found in the official documentation.
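Putting both concepts together, a playbook could roll out the -agentpath setting from earlier to every application server in the inventory at once. The sketch below is a hypothetical example, not the article's own playbook: it uses Ansible's lineinfile module, and the defaults file path and agent location are assumptions for illustration.

```yaml
# file: monitoring.yml -- hypothetical sketch, not the article's playbook
---
- hosts: appservers
  remote_user: deploy
  sudo: yes
  tasks:
    - name: Load the Dynatrace agent into Tomcat via JAVA_OPTS
      lineinfile:
        dest: /etc/default/tomcat7
        line: 'JAVA_OPTS="$JAVA_OPTS -agentpath:/opt/dynatrace/agent/lib/libdtagent.so=name=java-agent,collector=dynatrace.company.com"'
```

As with the Tomcat example, this would run against the production inventory via ansible-playbook -i production monitoring.yml.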

Concept #3 is covered in the full article.

More Stories By Martin Etmajer

Leveraging his outstanding technical skills as a lead software engineer, Martin Etmajer has been a key contributor to a number of large-scale systems across a range of industries. He is as passionate about great software as he is about applying Lean Startup principles to the development of products that customers love.

Martin is a life-long learner who frequently speaks at international conferences and meet-ups. When not spending time with family, he enjoys swimming and Yoga. He holds a master's degree in Computer Engineering from the Vienna University of Technology, Austria, with a focus on dependable distributed real-time systems.
