
Log Analysis and Automated Orchestration | @DevOpsSummit [#DevOps]

Log Analysis Takes Automated Orchestration Further

Log analysis is a simple and cool way to add oversight to your orchestration environments.

Scripting your infrastructure is a powerful tool, but on its own it is a quick-hit device: it builds the environment and leaves little record of how.

But wait, there is more! You don’t even have to create the scripts manually; there are tools that will scan your base environment and generate the scripts for you.

How do you manage the script runs? How do you know which version of which script was used for which deployment of the infrastructure?

If you don’t, you waste time trying to remember which scripts were run. Maybe you have created unmanageable spreadsheets to track them. But let’s face it, this approach creates entries you never look at again. And when something goes wrong, you have to guess which script the problem is in before you can correct it.

You can start simple by storing your scripts in Git.

Get some versioning at least. But you still know nothing about the runs themselves: how those versions correlate to your infrastructure activity or, more important, to issues. It would be nice to know that an old script was run on a particular machine, and how that run correlated to some issue, e.g. because it installed the wrong version of Apache. I’m sure you can imagine the possible scenarios.

You can get that visibility, but maybe not in the way you expected.

During your deploy, for each script run, add simple calls to your log system that record the run date and time, the script version, and any other metadata associated with the deployment. With this log data, you now have a new management tool.
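
A minimal sketch of what such a call might look like in Python, assuming your log analysis platform accepts JSON events over an HTTP ingestion endpoint. The endpoint URL, event fields, and helper name here are illustrative, not a specific product API:

    # Minimal sketch: record one infrastructure script run as a structured log event.
    # LOG_ENDPOINT and the event fields are hypothetical, not a specific product API.
    import json
    import socket
    import subprocess
    import urllib.request
    from datetime import datetime, timezone

    LOG_ENDPOINT = "https://logs.example.com/ingest"   # hypothetical ingestion URL

    def log_script_run(script_name, script_version, status, extra=None):
        event = {
            "event": "infra_script_run",
            "script": script_name,
            "version": script_version,              # e.g. the Git commit or tag
            "status": status,                       # "started", "succeeded", "failed"
            "host": socket.gethostname(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        if extra:
            event.update(extra)                     # any other deployment metadata
        req = urllib.request.Request(
            LOG_ENDPOINT,
            data=json.dumps(event).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)

    # Example: wrap a provisioning script and record its run.
    version = subprocess.check_output(["git", "rev-parse", "--short", "HEAD"]).decode().strip()
    log_script_run("provision_web.sh", version, "started")
    result = subprocess.run(["./provision_web.sh"])
    log_script_run("provision_web.sh", version, "succeeded" if result.returncode == 0 else "failed")

The same idea works with a one-line curl call from a shell wrapper; the point is simply that every run emits its script name, version, and timestamp.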

The effort is minimal; the impact is fantastic.

The entire operations team can now see, for example, which script versions are currently in production, which versions do not match other script runs, and, if something goes wrong in the infrastructure, which script and run date were part of it. You can identify the issue and address it more quickly.
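
For example, a small script can reduce those events to the most recent run of each script on each host. This sketch assumes the events from the previous example have been exported as JSON lines; how you actually search or export depends on your log platform:

    # Minimal sketch: reduce exported "infra_script_run" events (JSON lines) to the
    # most recent run of each script on each host. "script_runs.jsonl" is hypothetical.
    import json

    def latest_runs(log_lines):
        latest = {}
        for line in log_lines:
            event = json.loads(line)
            if event.get("event") != "infra_script_run":
                continue
            key = (event["host"], event["script"])
            # ISO-8601 UTC timestamps compare correctly as strings
            if key not in latest or event["timestamp"] > latest[key]["timestamp"]:
                latest[key] = event
        return latest

    with open("script_runs.jsonl") as export:
        latest = latest_runs(export)
    for (host, script), event in sorted(latest.items()):
        print(f"{host}: {script} @ {event['version']} ({event['timestamp']})")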

You can also use machine-learning capabilities such as anomaly detection to flag a script run that does not match the previous sequence of runs, or inactivity alerting to know when something expected did not happen.
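
An inactivity check is easy to sketch on top of the same data. Most log analysis platforms offer this as a built-in alert type; the hypothetical policy table and the reuse of the latest-run map from the previous sketch are only for illustration:

    # Minimal inactivity-alert sketch on top of the latest-run map from the previous
    # example. EXPECTED is a hypothetical policy; real platforms provide this as a
    # built-in alert type.
    from datetime import datetime, timedelta, timezone

    EXPECTED = {("web-01", "provision_web.sh"): timedelta(hours=24)}

    def check_inactivity(latest, now=None):
        now = now or datetime.now(timezone.utc)
        alerts = []
        for (host, script), max_age in EXPECTED.items():
            event = latest.get((host, script))
            if event is None:
                alerts.append(f"{script} has never run on {host}")
            elif now - datetime.fromisoformat(event["timestamp"]) > max_age:
                alerts.append(f"{script} on {host} is overdue (last run {event['timestamp']})")
        return alerts

    for alert in check_inactivity(latest):
        print("ALERT:", alert)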

If you are using your log analysis platform for application data as well, you will be able to identify more quickly whether a current issue is related to the back end, the infrastructure, or the front end.

By adding simple calls to your log analysis platform on every infrastructure deployment, you get instant visibility, real-time dashboards, and smart logic around your deployments.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years’ experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
