Log Analysis Takes Automated Orchestration Further

Log analysis is a simple and cool way to add oversight to your orchestration environments.

While scripting your infrastructure is a powerful tool, on its own it is a quick hit: the script runs, and afterwards you have little record of what it did or when.

But wait, there is more! You don’t even have to create the scripts manually; there are tools out there that will scan your base environment and generate the scripts for you!


How do you manage the script runs? How do you know which version of which script was used for which deployment of the infrastructure?

If you don’t, you waste time trying to remember which scripts were run. Maybe you have created unmanageable spreadsheets to track them, but let’s face it: this approach creates entries you never look at again. And when something goes wrong, you have to guess which script the problem is in before you can correct it.

You can start simple by storing your scripts in Git.

That gets you some versioning at least. But you still might not know anything about the runs themselves: how those versions correlate to your infrastructure activity or, more important, to issues. It would be nice to know that an old script was run on a particular machine, and how that script correlated to an issue, e.g., because it installed the wrong version of Apache. I’m sure you can imagine the possible scenarios.
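
As a minimal sketch (assuming your deployment scripts live in a local Git checkout, and written in Python purely for illustration), a deploy wrapper could capture the exact commit it is about to run, so that version can travel with every run record:

import subprocess

def current_script_version(repo_path="."):
    # Ask Git for the commit of the checked-out scripts.
    # repo_path is assumed to be the working copy your deploy runs from.
    return subprocess.check_output(
        ["git", "-C", repo_path, "rev-parse", "--short", "HEAD"],
        text=True,
    ).strip()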

You can do this, but maybe not in the way you expected.

During your deploy, and for each script run, add simple calls to your log system that record the run date and time, the version, and any other metadata associated with the deployment. With this log data, you now have a new management tool.
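
What that call looks like depends on your platform. Here is a rough sketch in Python, where the endpoint, token, and field names are placeholders rather than any specific vendor’s API:

import datetime
import json
import socket
import urllib.request

LOG_ENDPOINT = "https://logs.example.com/ingest"  # placeholder for your platform's ingestion URL
LOG_TOKEN = "YOUR-LOG-TOKEN"                      # placeholder credential

def log_script_run(script_name, script_version, status, extra=None):
    # One structured event per script run: what ran, which version,
    # on which host, when, and how it ended.
    event = {
        "event": "infra_script_run",
        "script": script_name,
        "version": script_version,
        "host": socket.gethostname(),
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "status": status,
    }
    if extra:
        event.update(extra)
    req = urllib.request.Request(
        LOG_ENDPOINT,
        data=json.dumps(event).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + LOG_TOKEN},
    )
    urllib.request.urlopen(req, timeout=5)

# Example: record that provision_web.sh at commit abc1234 succeeded.
# log_script_run("provision_web.sh", "abc1234", "success", {"apache_version": "2.4"})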

The effort is minimal; the impact is fantastic.

The entire Operations team can now know, for example, which script versions are currently in production, which versions might not match other script runs, and, if something goes wrong in the infrastructure, which script and run date were part of it. You can identify the issue and address it more quickly.
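
How you pull this out of your platform will vary, but the logic is straightforward. As a hypothetical sketch, assuming the run events above were exported one JSON object per line, the following reports the newest version of each script per host and flags scripts whose hosts disagree:

import json
from collections import defaultdict

def latest_versions(log_lines):
    # Keep the newest (run_at, version) per (host, script) pair.
    latest = {}
    for line in log_lines:
        event = json.loads(line)
        if event.get("event") != "infra_script_run":
            continue
        key = (event["host"], event["script"])
        if key not in latest or event["run_at"] > latest[key][0]:
            latest[key] = (event["run_at"], event["version"])
    return latest

def mismatched_scripts(latest):
    # Flag scripts whose hosts are not all on the same version.
    by_script = defaultdict(set)
    for (host, script), (_, version) in latest.items():
        by_script[script].add(version)
    return {script: versions for script, versions in by_script.items()
            if len(versions) > 1}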

You can also utilize machine-learning capabilities like anomaly detection to find where a script run does not match the previous sequence of script runs, or inactivity alerting to know when something expected did not happen.
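
Anomaly detection and inactivity alerting are normally features of the log analysis platform itself; purely as a toy illustration of the inactivity idea (not any vendor’s API), here is a check that flags hosts whose last recorded run, taken from the latest mapping in the previous sketch, is older than the expected interval:

import datetime

def inactive_hosts(latest, expected_interval_hours=24):
    # Return (host, script, last_run) tuples whose most recent run
    # is older than the expected interval.
    cutoff = (datetime.datetime.now(datetime.timezone.utc)
              - datetime.timedelta(hours=expected_interval_hours))
    stale = []
    for (host, script), (run_at, _) in latest.items():
        if datetime.datetime.fromisoformat(run_at) < cutoff:
            stale.append((host, script, run_at))
    return stale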

If you are using your log analysis platform for application data as well, you will be able to identify more quickly whether a current issue is related to the backend, the infrastructure, or the front end.

By using log analysis, with simple calls to the platform on every infrastructure deployment, you’ll have instant visibility, real-time dashboards, and smart logic on your infrastructure deployments.

More Stories By Trevor Parsons

Trevor Parsons is Chief Scientist and Co-founder of Logentries. Trevor has over 10 years’ experience in enterprise software and, in particular, has specialized in developing enterprise monitoring and performance tools for distributed systems. He is also a research fellow at the Performance Engineering Lab Research Group and was formerly a Scientist at the IBM Center for Advanced Studies. Trevor holds a PhD from University College Dublin, Ireland.
