
You Need #DevOps | @DevOpsSummit @DMacVittie #CD #APM #Monitoring

The problem is right in front of us, we’re confronting it every day, and yet a ton of us aren’t fixing it for our organizations

For those unfamiliar: as a developer working in marketing for an infrastructure automation company, I have tried to clarify the different flavors of DevOps by capitalizing the part that benefits in a given scenario. In this case we’re talking about operations improvements – DevOPS. While devs – particularly those involved in automation or DevOps – will find it interesting, this piece really speaks to the growing issues Operations teams are facing.

The problem is right in front of us; we’re confronting it every day, and yet a ton of us aren’t fixing it for our organizations – we’re merely kicking the ball down the road.

The problem? Complexity. Let’s face it, the IT world is growing more complex by the week. Sure, SaaS simplified a lot of complex apps that either weren’t central to the business we’re in or were vastly similar for the entire market, but once you get past those easy pickings, everything is getting more complex.

As I’ve mentioned in the past, we now have OpenStack on OpenStack. Yes, that is indeed a thing. But setting aside nesting complexity to solve complexity issues (which is the stated purpose of OoO), rolling out an enterprise NoSQL database – or, even worse, a Big Data installation – means deploying a complex set of interlocking systems, some of which might be hosted in virtual machines or the cloud, adding yet another layer of configuration complexity. The same is true for nearly every “new” development going on. Want SDN? Be prepared to install a swath of supporting systems. The list goes on and on.

In fact, what started this train of thought for me was digging into Kubernetes. Like most geeks, I started with the getting-started app – we have devolved to “try first, read later” in our industry, for good or bad. The Kubernetes Getting Started Guide is a good example of how bad our complexity problem has gotten. To make use of the guide you need Docker, GKE, and GCR; then you need bash, Node, and a command line with an array of parameters that, because you’re just getting started, you have no way of understanding.
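To give a concrete feel for the layers involved: even before you touch the cloud accounts and command-line tooling, a trivial “hello world” deployment means writing a manifest something like the sketch below. The image name, label, and port here are hypothetical placeholders, not from any particular guide:

```yaml
# Minimal Kubernetes Deployment sketch (names and image are illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello          # must match the pod template labels below
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/example-project/hello:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

And that manifest assumes a working cluster, a container registry, and a built image – each of which is its own install-and-configure project.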

We need time to get this stuff going, and time is something we have had less and less of for the last decade or so, at least. The amount and complexity of the gear Operations oversees has been increasing, and so has the number of instances – be they virtual or cloud – all at a faster rate than headcount at most organizations. And that’s a growing problem too.

One does not simply “deploy Kubernetes,” it appears. One has to work at it, just as one has to struggle with Big Data installs or UCE configuration – or, in some orgs, even Linux installations (which are still handled individually and by hand in more places than makes sense to me – but I work for a company that sponsors a Linux install automation open source project, so perhaps my view is jaded by that experience).

To find the time to figure out and implement toolsets like Kubernetes and OoO, whose stated goal is to make your life easier in the long run, we need to remove the overhead of day-to-day operations. That’s where DevOPS comes in. If the man-hours to deploy a server or an app can be reduced to zero, or near zero, by automation tools and a strong DevOps focus, then that recovered time can be reinvested in new tools that further improve operations. Yes, it’s a circular problem – you need time to get time – but simple, easy-to-master tools can free time to tackle the more complex ones. My employer’s Stacki project, for example, is a simple “drop in the ISO, answer questions about the network, install, then learn a simple command line.” There are a lot of sophisticated tools out there that follow this type of install pattern and free up an impressive amount of time.

Most application provisioning tools are relatively painless to set up these days (though that wasn’t always true) and can reap benefits quickly. My first run with Ansible, by way of illustrating that statement, had me deploying apps in a couple of hours. It would take longer to configure it to deploy complex datacenter apps, but most of us can find a few hours over the course of a couple of weeks, particularly if we convince management of the potential benefits beforehand. As an added benefit, application provisioning tools from most vendors increasingly include network provisioning, further reducing time spent on manual tasks (once again, after you figure it out).
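For a sense of scale, a first Ansible playbook can be as small as the sketch below. The `webservers` inventory group, the package, and the paths are hypothetical, and it assumes Debian-family hosts; it’s an illustration of the low barrier to entry, not a tested production playbook:

```yaml
# Hypothetical first playbook: install a web server and drop in an app.
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present

    - name: Deploy the application archive
      unarchive:
        src: files/myapp.tar.gz    # copied from the control node
        dest: /var/www/myapp

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: yes
```

A dozen lines of YAML like this, run against an inventory file, is repeatable across ten hosts as easily as one – which is exactly where the recovered hours come from.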

And that’s the real reason we need DevOPS. People talk about repeatability, predictability, reduced human error – all true, but each comes with its own trade-offs. The real reason is to free up time so we can focus on the complex new systems being rolled out and get them settled, without interrupting our day for standard maintenance work that consumes an inordinate amount of time.

In the end, isn’t that what we’d all love to have – the repeated steps largely automated, so that we can look into new tools that improve operations or help drive the organization forward? Take some time and invest in cleaning up ops, so that you can free time to move things forward. It’s worth the investment.

In the case of servers, the man-hours invested to get from nothing to hundreds of machines can be reduced from (hundreds of machines × hours per machine) to “tell the tool about the IPs and boot the machines to be configured.” That’s huge. Even if you sit and watch the installs to catch any problems, the faster server provisioning toolsets will be done with those hundreds of machines in an hour or two, which means that even after troubleshooting, you’re likely to be off doing something else the next day. Not a bad ROI for the little bit of time it takes to get started. Reinvest some of that savings in the next automation tool and compound the return; soon you’re in nirvana, researching and implementing, while installs, reinstalls, and fixes to broken apps are handled by reviewing a report and telling the system in question (app or server provisioning) to fix it or install it.
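The arithmetic above is worth making explicit. All the numbers in this sketch are illustrative assumptions – plug in your own machine counts and install times:

```python
# Back-of-the-envelope ROI for automated server provisioning.
# Every figure here is an assumption for illustration, not a measurement.

machines = 300            # servers to provision
hours_per_machine = 2     # hand-built install, per machine
setup_hours = 8           # one-time cost to stand up the provisioning tool
watch_hours = 2           # babysitting the automated bulk install

manual_hours = machines * hours_per_machine    # hundreds of machines x hours per machine
automated_hours = setup_hours + watch_hours    # tool setup plus watching the run

savings = manual_hours - automated_hours
print(f"Manual: {manual_hours}h, automated: {automated_hours}h, recovered: {savings}h")
```

Even with generous padding for troubleshooting, the recovered hours dwarf the setup cost – and that surplus is what funds the next automation tool.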

It’s pretty clear that complexity will continue to increase, and tools to simplify that complexity will continue to come along. It is definitely worthwhile to invest a little time in those tools so you can invest more in those new systems.

But that’s me: I’m a fan of looking into the possible, not doing the same stuff over and over. I always assume most of IT is the same – if only they had the time. And we can have the time, so let’s do it.

More Stories By Don MacVittie

Don MacVittie is founder of Ingrained Technology, a technical advocacy and software development consultancy. He has experience in application development, architecture, infrastructure, technical writing, DevOps, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University and an M.S. in Computer Science from Nova Southeastern University.
