


DevOps and Automation | @DevOpsSummit #DevOps #ML #Microservices


Despite, or perhaps because of, the explosive growth of DevOps, there is still a great deal of confusion on the topic. While I've written about this before, this article takes a shot at dividing things up the way we naturally do as users, a distinction vendors often fail to make, simply because "DevOps" is a hot term, and vendor talk and writing is all too often about SEO and AdWords.

Dev? Or Ops?
DevOps has two different sources: Development that is getting Ops added in (automated test, integration, and the like), and Ops that is getting Development added in (provisioning and configuration management). While both sides of IT participate in both, their genesis is different, and in all but a few organizations, so is the responsibility for them.


Since discussing all three boxes would become more of a treatise than a blog post, today we'll focus on devOPS and what each of these tool categories is actually for.

devOPS is where development methodologies (and often actual development work) are applied to Operations' traditional practices to create an automated datacenter for deployment and maintenance.

Server Provisioning
Automated spin-up of servers, physical or virtual, with a customizable image set and configuration options. Repeatability is key: the system must be able to restore a damaged install to a known good state. Also important is total automation; configuration of things like disk and network controllers is part of spinning up a machine. (Full disclosure: I work for a server provisioning company, and this post originally appeared on the www.stacki.com website, home of an open source provisioning tool.)
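The repeatability requirement can be sketched as a provisioning run that is a pure function from a machine spec to an ordered list of steps; re-running the same spec always restores the same state. This is a hypothetical spec format and invented step names for illustration, not the interface of Stacki or any real tool:

```python
# Hypothetical sketch: a deterministic "plan" built from a machine spec.
# Because the same spec always yields the same steps, the provisioning
# system can restore a damaged install to a known good state by rerunning.

def provision_plan(spec):
    """Expand a machine spec into an ordered, repeatable list of steps."""
    steps = [("image", spec["image"])]              # base OS image first
    for disk in spec.get("disks", []):              # disk controllers
        steps.append(("partition", disk))
    for nic in spec.get("nics", []):                # network controllers
        steps.append(("configure_nic", nic["name"], nic["ip"]))
    return steps

spec = {
    "image": "linux-minimal",
    "disks": ["sda", "sdb"],
    "nics": [{"name": "eth0", "ip": "10.0.0.5"}],
}

plan = provision_plan(spec)
```

The point of the sketch is that the plan is data, so it can be diffed, replayed, and audited, which is what separates provisioning from a pile of one-off shell scripts.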

Configuration Management (Application Provisioning)
Once the machine is spun up, the applications that make it perform a specific task must be installed. Today this is largely a separate step from provisioning, but the market is clearly moving toward a stage where the two are bundled into a single toolset. Provisioning a server without apps is half the job, and provisioning apps without a server is impossible, so it makes sense that the two will slowly merge, possibly under the aegis of orchestration (below). The purpose of configuration management tools is to make certain that the applications, application prerequisites, and the configuration of both are correctly set up on target machines. Most of us know these tools better than any of the others.
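Underneath, "correctly set up on target machines" comes down to desired-state convergence: compare what should be there with what is, and compute the difference. A minimal sketch, with invented package names and not the model of any particular tool:

```python
def converge(desired, installed):
    """Compute the actions needed to bring a host to its desired state."""
    actions = []
    for pkg, version in desired.items():
        if installed.get(pkg) != version:       # missing or wrong version
            actions.append(("install", pkg, version))
    for pkg in installed:
        if pkg not in desired:                  # present but unwanted
            actions.append(("remove", pkg))
    return actions

desired = {"nginx": "1.24", "postgresql": "15"}
installed = {"nginx": "1.18", "telnetd": "0.17"}
actions = converge(desired, installed)
```

A useful property falls out for free: if the host already matches the desired state, the action list is empty, which is the idempotence that makes these tools safe to run repeatedly.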

Orchestration
Today the focus of most orchestration tools is on coordinating the deployment of complex software across the datacenter. The ability to fully deploy and configure an application, no matter how many servers it is spread across, is powerful, as long as the automation gets it right. The leaders in the space certainly take steps to make certain the orchestration tool gets it right, or reports to users what is broken in the infrastructure.

Moving forward, I see it as likely that orchestration will increasingly fold in server provisioning, so that the orchestration tool handles everything from spin-up to spin-down. Today there is good support for virtual and cloud spin-up, but less for hardware spin-up. Watch for that to change; eventually, where you want the orchestration tool to deploy will matter less than what you want it to deploy.
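The coordination problem underneath multi-server deployment is largely dependency ordering: the database tier has to come up before the app tier, the app tier before the web tier. A toy sketch (service names invented; real orchestrators layer retries, health checks, and parallelism on top of this idea):

```python
def deploy_order(deps):
    """Topologically sort services so each deploys after its dependencies."""
    order, seen = [], set()

    def visit(service):
        if service in seen:
            return
        seen.add(service)
        for dep in deps.get(service, []):   # deploy dependencies first
            visit(dep)
        order.append(service)

    for service in deps:
        visit(service)
    return order

# web needs app, app needs db; db has no dependencies
deps = {"web": ["app"], "app": ["db"], "db": []}
order = deploy_order(deps)
```

Given those dependencies, the database always lands in the order before the app server, and the app server before the web tier, regardless of how the services are listed.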

Process Management
Process Management is often included in monitoring, but because it has a very focused role and does more than monitor, I've chosen to split it out. The point of process management is to make certain your services and apps are running. Depending upon the tool, it can also make certain the processes are responding, though that is more often done as part of overall automated monitoring. No matter how good your deployment methodologies are, a crashed process cannot respond to users, so this is a pretty important bit in the automation world.
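At its core, the job is a check-and-restart loop. A minimal sketch with a simulated process table; a real supervisor would query the OS (pid files, cgroups, service managers) rather than a Python set:

```python
def supervise(managed, running, restart):
    """Restart any managed service that is not currently running."""
    restarted = []
    for name in managed:
        if name not in running:
            restart(name)          # bring the crashed process back
            restarted.append(name)
    return restarted

running = {"nginx", "postgres"}               # simulated process table
managed = ["nginx", "postgres", "appserver"]  # what should be running

# here "restart" just adds the name back to the simulated table
restarted = supervise(managed, running, restart=running.add)
```

A second pass over the same table restarts nothing, which is the steady state a supervisor loop converges to between failures.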

Monitoring (Logging)
Monitoring encompasses logging and watching systems for errors. While included in many of the other toolsets, a proper monitoring system will work with the overall information available across the datacenter and application to provide a more comprehensive picture of what is going on. For example, a monitoring tool can watch hardware failures, OS counters and errors, and application issues together to help track down what actually happened from a "single pane of glass," as they say.
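That "single pane of glass" is essentially a time-ordered merge of events from every layer. A sketch with invented event data, showing how a hardware fault, an OS reaction, and an application symptom line up once they share one timeline:

```python
def correlate(sources):
    """Merge per-source event streams into one time-ordered timeline."""
    timeline = []
    for source, events in sources.items():
        for timestamp, message in events:
            timeline.append((timestamp, source, message))
    return sorted(timeline)            # chronological across all layers

sources = {
    "hardware": [(101, "disk sda read errors")],
    "os":       [(102, "filesystem remounted read-only")],
    "app":      [(103, "database writes failing")],
}

timeline = correlate(sources)
```

Read in order, the merged timeline tells the story the separate logs hide: the app-level failure at the bottom is a consequence of the hardware event at the top.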

Metrics
Knowing what broke and where, or knowing that processes are actually running, does not cover the entire picture of application availability. Knowing how responsive the application is and where the bottlenecks are is every bit as important, particularly in high-volume or spike situations. That's where metrics come in. They track historic performance and response times, and offer insight into what might be slowing down an application, both in daily use and in heavy peak periods. Insight into overall app performance and specific subsystems/infrastructure helps resolve problems quickly, and even helps prevent problems through early detection. From low-level disk through API response times, measurement is the key to fine-tuning performance.
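Response-time tracking usually reduces to keeping samples and reporting percentiles, because averages hide exactly the spikes that matter in peak periods. A sketch using the nearest-rank method on simulated latencies:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# 100 simulated response times in milliseconds: mostly fast, a slow tail
latencies = [20] * 95 + [900] * 5
```

Here the median says the app is fine at 20ms, while the 99th percentile exposes the 900ms tail that 1 in 100 users actually experiences.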

Network Automation
DevOps has a weird relationship with network automation. Most DevOps pundits simply ignore it, feeling that's the realm of SDN. Others try to include the network but stick to only basic functionality, because that is what enables "real DevOps." But the reality is that for a fully automated datacenter, networking simply must be included. Without proper VLANs, DNS, and DHCP, all those pretty apps are pretty useless. In fact, according to this blog on F5.com (relationship disclosure: the author is my wife, but also a genius-level geek), there is a whole host of network services that need to be automated to achieve an automated datacenter.

Network automation is still in its relative infancy, with vendors just starting to take APIs seriously and tools to manage cross-vendor network architectures few and far between, but it's coming. Using network automation, a Big Data installation could be spun up with its own subnet or VLAN, the complete automation of the network part adding to the complete automation of the server and application parts. The better vendor APIs get, the faster this becomes a reality, but you can do it today if you're willing to put in a little work.
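As a sketch of what "spin up a Big Data installation with its own VLAN" could look like against a vendor API: the API class below is an invented stand-in (real vendor APIs differ widely, which is exactly the cross-vendor problem above), but the shape of the workflow, VLAN then subnet then DNS, is the part that automation captures:

```python
class FakeNetworkAPI:
    """Invented stand-in for a vendor network API; records what was set up."""
    def __init__(self):
        self.vlans, self.subnets, self.dns_zones = {}, {}, []

    def create_vlan(self, vlan_id, name):
        self.vlans[vlan_id] = name

    def create_subnet(self, cidr, vlan_id):
        self.subnets[cidr] = vlan_id

    def add_dns_zone(self, zone):
        self.dns_zones.append(zone)

def provision_cluster_network(api, cluster, vlan_id, cidr):
    """Carve out an isolated VLAN, subnet, and DNS zone for a new cluster."""
    api.create_vlan(vlan_id, name=f"{cluster}-vlan")
    api.create_subnet(cidr, vlan_id=vlan_id)
    api.add_dns_zone(f"{cluster}.internal")

api = FakeNetworkAPI()
provision_cluster_network(api, "bigdata", 42, "10.42.0.0/24")
```

Swap the fake class for a vendor's real client and the workflow is the "little work" mentioned above: the logic stays the same, only the calls change per vendor.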

There are a ton of ways to subdivide DevOps, and plenty of people have tried. I offer this alternative simply as a way of looking at it that follows the natural path from manual to automated and then into DevOps. It keeps the conversation uncluttered (and as I've mentioned before, it often is cluttered), so we can talk about what's important rather than generic "DevOps."

More Stories By Don MacVittie

Don MacVittie is the founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
