Automated versus Controlled By @DMacVittie | @DevOpsSummit #DevOps

Traditionally, the "review and approve" step is part and parcel of automation

In the currently high-flying world of DevOps, automation is king. We are "automating all of the things" at an astounding rate, and it is increasing productivity by reducing the man-hours invested in repetitive work. All very cool, but there is a dark side to automation, and that is control. While automating the entire datacenter is impressive, immature products sometimes lack the ability to let operators step in and approve changes before rolling them out.

That "review and approve" step has traditionally been part and parcel of automation. New "just do it" tools are normally equipped with a review step to increase their appeal to datacenter teams during the verification and acceptance phases of a rollout.

But DevOps got ahead of itself, and some vendors, wanting to move faster than competitors, started building their tools to take action by default, with no provision for that moment of human oversight. This probably works well for the majority of changes, but without that "brake", the train can derail. Outages at several large web providers that were cutting-edge DevOps shops showed exactly how that can happen. In one case I'm familiar with, the original mistake was a human error, but propagation happened so fast that before the error could be fixed, thousands of customers were left with service interruptions, and cleanup was slowed, and even muddied, by the automated systems re-propagating the misconfiguration.

So do we stop automating and roll back DevOps? Absolutely not. But we should proceed with a bit of caution, and we should keep an eye on the ability of our tools to stop and let a human check things out. And of course, we need to make certain those check-points are designed in, and that there are responsible humans doing the checking.

The idea is simple: let the system do the work in an automated fashion, but keep operators in a position to give go/no-go decisions on the changes being made. That can range from "the system only does what it's told" - the Stacki project, which I'm familiar with, will install hundreds of machines at once, but installs them exactly the way humans tell it to (normally via spreadsheet, but also by command line or automated tool) - to an explicit "stop and review" step, as in Terraform by HashiCorp, whose "plan" step lets operators see exactly what steps will be taken to bring infrastructure to the desired state.

In the case of Stacki with spreadsheets, the desired configuration for hundreds or thousands of machines can be placed into the spreadsheet and reviewed by any number of people before it is applied. This gives teams the ability to make certain that major infrastructure build-outs or changes are solid and won't cause massive problems, while letting automation do all of the repetitive heavy lifting of configuring RAID and installing the operating systems. In the case of Terraform, the "plan" step allows an experienced operator to quickly review what would be done, and compare it to what they feel needs to be done, before the changes are applied.
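
To make that concrete, here is a minimal sketch of what a go/no-go gate can look like when wrapped around Terraform's plan/apply cycle. The terraform commands (plan -out, show, apply) are the tool's actual workflow; the wrapper script, the plan file name, and the prompt text are my own illustration, not anything Terraform or Stacki ships:

#!/usr/bin/env python3
"""Sketch: wrap Terraform's plan/apply cycle in an explicit go/no-go gate.
The plan file name and prompt are illustrative; the terraform commands
themselves are standard."""
import subprocess
import sys

PLAN_FILE = "tfplan"

def main() -> None:
    # Compute and save a plan; no infrastructure is changed yet.
    subprocess.run(["terraform", "plan", f"-out={PLAN_FILE}"], check=True)

    # Show the operator exactly what the saved plan would do.
    subprocess.run(["terraform", "show", PLAN_FILE], check=True)

    # The check-point: a human gives the go/no-go decision.
    if input("Apply these changes? Type 'yes' to proceed: ").strip().lower() != "yes":
        print("No-go: plan discarded, nothing applied.")
        sys.exit(1)

    # Apply exactly the reviewed plan, not a freshly computed one.
    subprocess.run(["terraform", "apply", PLAN_FILE], check=True)

if __name__ == "__main__":
    main()

Applying the saved plan file, rather than re-running the plan at apply time, is the important detail: the operator's approval covers exactly the changes that get executed, with nothing recomputed in between.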

In both cases, automation is saving man-hours, but control remains in the hands of the operations teams that need to ensure the smooth execution of changes across the datacenter.

There are other tools that allow for these and other types of checking (indeed, most tools have some such capability, whether on by default or not), but the check-points have to be taken advantage of. Make certain staff is reviewing the critical sections of your automation, so that you are not merely increasing the velocity of destruction.

As most DevOps pros will tell you, planning ahead and including check-points in the process helps a lot, both in protecting critical infrastructure and in easing the transition to an automated environment. Some projects will have corner cases and bugs that must be found, and check-points keep those problems from reaching production before they are fixed. In longer-term projects, it is sufficient to review only when the system has changed significantly, or when it is introducing significant changes to the overall datacenter.
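
A selective check-point like that can be as simple as a policy gate: routine changes flow straight through at full automated speed, while destructive or unusually broad batches stop and wait for a human. The sketch below is illustrative only - the Change shape, the action names, and the ten-change threshold are invented stand-ins for whatever your tooling actually reports:

"""Sketch of a selective check-point: destructive or unusually broad
change batches pause for human review; routine changes flow through.
The Change type, action names, and threshold are invented for illustration."""
from dataclasses import dataclass

@dataclass
class Change:
    action: str  # e.g. "create", "update", "delete"
    target: str  # e.g. "web-03.raid", "db-cluster.firewall"

DESTRUCTIVE_ACTIONS = {"delete", "replace"}  # always worth a second look
MAX_UNREVIEWED_CHANGES = 10                  # broad batches get reviewed too

def needs_review(changes: list[Change]) -> bool:
    """Return True if a human should approve this batch before it runs."""
    if len(changes) > MAX_UNREVIEWED_CHANGES:
        return True
    return any(c.action in DESTRUCTIVE_ACTIONS for c in changes)

def run(changes: list[Change]) -> None:
    if needs_review(changes):
        print("The following changes require approval:")
        for c in changes:
            print(f"  {c.action:8} {c.target}")
        if input("Proceed? [y/N] ").strip().lower() != "y":
            print("Held for review - nothing applied.")
            return
    apply_changes(changes)

def apply_changes(changes: list[Change]) -> None:
    # Placeholder: hand off to whatever actually executes the changes.
    for c in changes:
        print(f"applying: {c.action} {c.target}")

The shape matters more than the specifics: small, routine changes keep their automated velocity, and the check-point only fires when the blast radius justifies pulling a human in.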

Automation and DevOps make systems more stable overall while also increasing agility (the two are not mutually exclusive... a good automation system just requires good inputs to increase agility while maintaining uptime), so it takes only a little caution to make certain you're getting the most from the process. Some foresight in tool selection, and some more in process design, and you're set on a path that will save tons of man-hours without increasing overall risk to the organization. That's a good thing. Knowing what you want to check - just like knowing what you want to monitor - is the hard part, but some thought will yield a list of high-risk activities that need review before they are pushed out. Then add highly customized activities - white-listing, if security is included in your DevOps environment, or configuring RAID controllers - and you've got your basic list of check-points.

Then go do cool stuff with all the free time it generates (after the initial investment, you don't get free time for free)... And keep kicking it.

More Stories By Don MacVittie

Don MacVittie is the founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
