DevOps, Automation, and Mid-Market Companies

When you think about the largest and most dynamic networks in the world topics like automation are a no-brainer

The overall networking landscape has been going through a fairly deliberate shift over the past couple of years. Where we used to talk CapEx, we are now talking OpEx. Where we used to talk features, we are now talking about workflows. This change in industry dialogue mirrors the rise of trends like SDN and DevOps. I have been a huge fan of automation in general and DevOps in particular for many years now. But, as an industry, are we leaving people behind unintentionally?

When you think about the largest and most dynamic networks in the world (typically characterized as either service providers or web-scale companies), topics like automation are a no-brainer. The sheer number of devices in the networks that these companies manage demands something more than keying in changes manually. And for these types of companies, the network is not just an enabler – it is a central part of their business. Without the network, there is no business. It’s not terribly surprising that these companies hire small armies of capable engineers and developers to make everything function smoothly.

In these environments, automation is not a nice-to-have. It’s closer to food and water than it is to sports and entertainment. Accordingly, their interest in technologies that support automation is high. Their capability in putting automation tools to use is high. And if their abilities do not match their requirements, they open up their wallets to make sure they get there (think: OSS/BSS).

In networking, there is a prevailing belief that what is good for these complex environments will eventually make its way into smaller, less complex networks. It might take time, but the technologies and best practices that the most advanced companies employ will eventually trickle down to everyone else. It’s sort of the networking equivalent of Reaganomics.

But is this necessarily true?

First, let me reiterate that I am a huge advocate for automation and DevOps. But these capabilities might not be universally required. Automation is most important in environments where either the volume or rate of change is high enough to justify the effort. If the network is relatively static, changing primarily to swap out old gear for new functionally equivalent gear, it might not be necessary to automate much at all. Or if network changes are tied to incremental growth, it might not make sense to automate very much.

Automation enthusiasts (myself included) will likely react somewhat viscerally to the idea that automation isn’t necessary. “But even in these cases, automation is useful!” Certainly, it is useful. But what if your IT team lacks the expertise to automate all the things? What then? Sure, you can change the team up, but is it worth the effort?

And even if it is worth the effort, how far along the automation path will most companies need to go? It could be that simple shell scripts are more than enough to manage the rate of change for some companies. Full-blown DevOps would be like bringing a cruise missile to a water gun fight.
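To make the “simple shell scripts” level concrete, here is a minimal sketch of what that tier of automation can look like: render the same config stanza for a flat list of devices. The device names, VLAN number, and config syntax are all hypothetical, not tied to any vendor’s CLI.

```shell
#!/bin/sh
# Minimal sketch of "good enough" automation: emit one VLAN stanza
# per device from a flat list. Names and syntax are illustrative.
set -eu

DEVICES="switch01 switch02 switch03"   # in practice, read from an inventory file
VLAN_ID=42

for device in $DEVICES; do
  # A real script might pipe each snippet to the device over ssh,
  # e.g.: ... | ssh "$device" 'configure'
  cat <<EOF
! ${device}
vlan ${VLAN_ID}
 name engineering
EOF
done
```

When the rate of change is low, a loop like this plus ssh may genuinely be all the automation a network needs.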

In saying this, I am not trying to suggest that automation or DevOps are not important. Rather, the tools we associate with these are just that: tools. They need to be applied thoughtfully and where it makes sense. Vendors that build these tools and then try to push them too far down into the market will find that the demand for cruise missiles drops off pretty precipitously after the top-tier companies.

Even smaller-scale infrastructure does require workflow, though. The trick is in packaging the tools so that they are right-sized for the problems they are addressing.

This obviously starts with discarding the notion that workflows are common across all sizes of networks. That is simply not true. The reason there is pushback when people say the future of network engineering is programming is that, for many people, it is not yet a foregone conclusion that full-blown automation is worth the effort.

For these people, the juice isn’t worth the squeeze.

The conclusion to draw here is not that automation is not a good thing. It’s that automation packaged as a complex DIY project isn’t always the right fit. Not everyone wants to do it themselves. At home, it turns out I am capable of repainting a room, but it just isn’t worth my time, so I hire a professional. In a network, people might be fully capable of automating policy provisioning and still find that it isn’t worth doing because policy for them just isn’t that complex.

What vendors ought to be doing is packaging their workflow optimizations in a way that is far easier to consume. Rather than building scaffolding around the network to handle management, it might make sense to make the management itself much more intuitive and more a core part of the way devices are architected.

This might sound like a brain-dead statement, but consider that most networking devices are designed by people who do not run networks. Even worse, the workflows that dictate how things are used are frequently the last thing designed. If the mid-market and below are to get the advantages of the automation capabilities that the big guys are driving, vendors will need to design workflows explicitly for broad adoption.

If we really want to make the juice worth the squeeze, we need to make the squeeze a lot less painful. We need to move beyond automated networking closer to intuitive networking.

[Today’s fun fact: Lake Nicaragua boasts the only fresh water sharks in the entire world. I would be very motivated not to fall down while water skiing.]

The post DevOps, automation, and mid-market companies appeared first on Plexxi.

More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
