Clash of Ops | @DevOpsSummit #BigData #APM #DevOps #Docker #Monitoring

DevOps and netops speak different languages that use the same word to mean different things

It was a Monday. I was reading the Internet. Okay, I was skimming feeds. Anyway, I happened across a title that intrigued me, "Stateful Apps and Containers: Squaring the Circle." It had all the right buzzwords (containers) and mentioned state, a topic near and dear to this application networking-oriented gal, so I happily clicked on through.

Turns out that Stateful Apps are not Stateful Apps. Seriously.

To be fair, I should really say that when a devops guy talks about "stateful apps" it is not the same thing as when a netops gal uses the term "stateful apps." That's because the devops guy is referring to persistent data storage for applications: file systems, databases, and so on. When a netops gal talks about stateful apps, she's talking about the unique characteristics that identify an existing TCP connection between two systems, like a client and an app. Devops thinks in terms of app data, netops in terms of network data.

Devops and netops speak different languages that use the same word to mean different things. It's like English. No big deal.
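
To make the two vocabularies concrete, here's a minimal Python sketch (the host, port, and file name are placeholders, not anything from a real system). The netops sense of state is the tuple that identifies one live TCP connection; the devops sense is data that has to outlive any single connection:

# Two meanings of "stateful," side by side. Host, port, and file name
# are illustrative placeholders.
import socket

# netops "state": the tuple that identifies one live TCP connection
with socket.create_connection(("example.com", 80)) as conn:
    print("connection state:", conn.getsockname(), "->", conn.getpeername())

# devops "state": data that must persist beyond any one connection
with open("orders.log", "a") as datastore:   # stand-in for a real database
    datastore.write("order 42: 3 widgets\n")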

The thing is, this may seem like a minor issue to worry about. But then I got to thinking about emerging application architectures like microservices, the dominance of APIs, and the urgency with which everyone is moving to secure HTTP traffic. And I realized that actually, it is a pretty big deal, because it's a clash of ops. While devops is over there building stateless architectures based on the newest theories and principles of scalability, we're requiring security that basically negates many of the benefits we might have seen.

That's because the nature of public key cryptography requires state in the network.

Here Comes the (Computer) Science
Public key infrastructure (PKI) is based on a fairly simple premise: the two endpoints (client and app) exchange information that is unique to that connection. That means any subsequent exchanges have to take place between the same two endpoints that established the connection.

That's stateful networking.
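
Here's what that looks like in practice, as a minimal Python sketch using the standard ssl module (the host is a placeholder, and the context is pinned to TLS 1.2 so the session object is available right after the handshake). The keys negotiated during the handshake are unique to that connection, and a later connection can only resume them because both sides kept that state around:

# Sketch: the handshake produces per-connection state that both endpoints
# have to remember. Host is a placeholder; pinned to TLS 1.2 so the session
# object exists immediately after the handshake.
import socket
import ssl

ctx = ssl.create_default_context()
ctx.maximum_version = ssl.TLSVersion.TLSv1_2

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print("negotiated:", tls.version(), tls.cipher())
        session = tls.session          # state unique to this connection

# Resuming that session on a new connection only works because both the
# client and the server remembered it -- stateful networking.
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com",
                         session=session) as tls:
        print("session reused:", tls.session_reused)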

Even if your entire architecture is based on stateless microservices, once you add security (SSL/TLS), it's stateful. Whamo! Just like that. And that impacts scale. Because now you've got to figure out how best to distribute traffic not just based on how loaded any given instance of that app might be, but also based on which instance already holds a client's session.
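
A toy sketch of that constraint (the instance names and session IDs are invented for illustration): a stateless balancer is free to pick the least-loaded instance every time, but once session state lives on a particular instance, established sessions have to keep going back to it:

# Toy sketch: load-based scheduling vs. session persistence.
instances = ["app-1", "app-2", "app-3"]
load = {name: 0 for name in instances}
stick_table = {}  # session id -> instance holding that session's state

def pick_stateless():
    # Stateless: free to pick purely by current load.
    return min(instances, key=load.__getitem__)

def pick_stateful(session_id):
    # Stateful: an existing session is pinned to the instance that negotiated it.
    if session_id not in stick_table:
        stick_table[session_id] = pick_stateless()
    return stick_table[session_id]

for i, session_id in enumerate([b"a", b"b", b"a", b"a", b"c"]):
    target = pick_stateful(session_id)
    load[target] += 1
    print(f"request {i} (session {session_id!r}) -> {target}")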

And you probably don't want to be renegotiating secure sessions for every, single, interaction. You don't. I don't care how much faster HTTP/2 is, or how much better ECC is than previous generations of cryptography (spoiler: quite a bit better), there is still significant latency introduced by the process of negotiating that connection. There's the overhead of establishing the underlying TCP session and then the security negotiation on top of it. That adds latency thanks to all those round trips back and forth, which means slower application response times. Especially on mobile devices.
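
A rough back-of-the-envelope makes the point (the round-trip times below are illustrative assumptions, not measurements). The TCP handshake costs one round trip before any data flows, and a full TLS 1.2 handshake adds two more:

# Back-of-the-envelope: round trips spent before the first application byte.
# RTT values are illustrative assumptions, not measurements.
rtt_ms = {"LAN": 1, "broadband": 30, "mobile": 100}

round_trips = {
    "TCP only": 1,                # three-way handshake
    "TCP + full TLS 1.2": 1 + 2,  # two extra round trips to negotiate keys
    "TCP + full TLS 1.3": 1 + 1,  # one extra round trip
}

for link, rtt in rtt_ms.items():
    for setup, n in round_trips.items():
        print(f"{link:9s} {setup:20s} ~{n * rtt:4d} ms before the request is sent")

On that assumed 100 ms mobile link, a full TCP plus TLS 1.2 setup burns roughly 300 ms before the request is even on the wire, which is exactly the kind of cost you avoid by not renegotiating on every interaction.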

So what, you might say. It's measured in milliseconds; that can't possibly impact the application experience.

But it does. Milliseconds matter, especially today, when digital natives who've never experienced what 2800bps feels like want their apps to respond instantaneously, with LAN-like performance.

What that means is that adding that layer of security (which is - or should be - a requirement) effectively turns your elegant, stateless architecture into a stateful one.

This is why architecture matters. It's no longer a matter of throwing a load balancer in front of those services and picking an algorithm; it's about extending the app architecture upstream, into the network, and understanding the advantages of terminating that security before it injects all that "state" into your "stateless" architecture. If the load balancer (or ADC, if you prefer) is terminating SSL/TLS, then it has to manage the negotiation and the back and forth with clients. That means it's free (if it's a modern proxy-based solution) to interact with services in the back end the way dev intended: statelessly.

[Figure: stateful-stateless-arch]
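
As a minimal sketch of that pattern (not a production proxy; the certificate paths, port, and back-end addresses are hypothetical), the proxy terminates TLS and holds all of the connection's crypto state on the client side, then talks plain TCP to whichever back-end is least loaded:

# Sketch of a TLS-terminating proxy: client-side TLS state stops here,
# back-end connections stay plain and stateless. Certificate paths, port,
# and back-end addresses are hypothetical.
import asyncio
import ssl

BACKENDS = [("10.0.0.11", 8080), ("10.0.0.12", 8080)]
load = {backend: 0 for backend in BACKENDS}

async def handle(client_reader, client_writer):
    backend = min(BACKENDS, key=load.__getitem__)  # free to pick purely by load
    load[backend] += 1
    try:
        upstream_reader, upstream_writer = await asyncio.open_connection(*backend)

        async def pump(src, dst):
            # Copy bytes until one side closes, then close the other.
            while data := await src.read(65536):
                dst.write(data)
                await dst.drain()
            dst.close()

        await asyncio.gather(pump(client_reader, upstream_writer),
                             pump(upstream_reader, client_writer),
                             return_exceptions=True)
    finally:
        load[backend] -= 1

async def main():
    tls = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    tls.load_cert_chain("cert.pem", "key.pem")  # hypothetical certificate files
    server = await asyncio.start_server(handle, "0.0.0.0", 8443, ssl=tls)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())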

The thing to be aware of is that when app architectures and network architectures meet, they can often clash and effectively negate all the goodness intended by the new app architecture in the first place. DevOps is as much about communication between groups as it is about automating the processes between them. That means understanding the impact of the network on apps, and vice versa, and agreeing on an architecture that preserves the best characteristics of the app architecture without sacrificing network speed or security.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
