
SDN Journal: Blog Post

Network Services, Abstracted and Consumable

The network has traditionally been very static and simplistic in its offerings

Perhaps not as popular as its siblings IaaS, PaaS, and SaaS, Network-as-a-Service, or NaaS, has slowly started to appear in industry press, articles, and presentations. While sometimes associated with a hypervisor-based overlay solution, its definition is not very clear, which is not at all surprising; our industry does not do well at defining new terms. I ran across a presentation from USENIX 2012 that details a NaaS solution adding a software forwarding engine to switches and routers that provides specific services for some well-known cloud computing workloads.

I have some serious reservations about the specific implementation of the network services provided in this presentation, but the overall idea of specific network services delivered to applications and workloads resonates well with me. Unless this is your first visit to our blog, your reaction is probably "duh, this is what Affinity Networking is all about". Of course it is.

The network has traditionally been very static and simplistic in its offerings. The vast majority of networks run with an extremely small set of network services. Find me a network that uses more than some basic QoS based on queueing strategies, IP multicast (which many understandably avoid as much as they can), and perhaps some VRFs, and we will probably agree that it is the exception rather than the rule. I deliberately exclude the underlying technologies used to accomplish this; those do not change the service, they just enable it.

And it is not that networks are incapable of providing other services. Most hardware in use is capable of doing much more, and in many cases the configuration of that hardware is even available. Extremely elaborate protocols exist to manage additional services, with new ones being developed constantly. And you can find paper after paper showing that specific network services can greatly improve overall solution performance. Many of these examples are based on big-data-type solutions, but I am pretty sure the conclusion translates to just about every solution that has a significant dependence on the network.

So why then do we not have a much richer set of network services available to the consumer of the networks?

There are probably multiple answers, but the one that keeps bubbling to the top each time we look at this is abstraction. In simple terms, we have not made network services easy to create, easy to maintain, easy to debug, and, most importantly, easy to consume. We talk about devops, yet the creation, debugging, and maintenance of complex network services inside the core of a network is not at all trivial. Per the examples above, getting end-to-end QoS (consistent queueing, really) in place seems like a simple task but is not, and that is technology that has been around for well over a decade. Configuring each and every switch to ensure it has the same queueing configuration and behaviors, adjusting drop rates and queue lengths based on where a switch fits into the network, and defining which applications should fit into which queue is complex not because of the topic itself, but because of the number of touch points, the number of configuration steps, and the switch-by-switch, hop-by-hop mechanisms by which we deploy it. This is where devops will start.
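To make the touch-point problem concrete, here is a minimal Python sketch of hop-by-hop QoS deployment. The switch names, roles, queue numbers, and parameters are all invented for illustration, not taken from any real device CLI; the point is only that every hop is a separate configuration that must be kept consistent with its role in the topology.

```python
# Hypothetical sketch: every switch is a separate touch point, and the
# "right" queueing settings depend on where the switch sits in the network.
# All names and values below are illustrative assumptions.

QUEUE_PROFILES = {
    # role: (queue_length_in_packets, drop_probability)
    "access":      (64,  0.02),
    "aggregation": (256, 0.05),
    "core":        (512, 0.10),
}

# Which application traffic lands in which queue (illustrative).
APP_TO_QUEUE = {"voice": 5, "storage": 4, "bulk": 1}

def render_config(switch_name, role):
    """Render the per-hop queueing configuration for one switch."""
    length, drop = QUEUE_PROFILES[role]
    lines = [f"! {switch_name} ({role})"]
    for app, queue in APP_TO_QUEUE.items():
        lines.append(f"queue {queue} app {app} length {length} drop {drop}")
    return "\n".join(lines)

# A three-switch network already means three configs that must agree;
# a real network multiplies this by hundreds of devices.
inventory = [("sw-access-01", "access"),
             ("sw-agg-01", "aggregation"),
             ("sw-core-01", "core")]
configs = {name: render_config(name, role) for name, role in inventory}
```

Even in this toy form, the service ("voice gets priority end to end") is smeared across every device; change the policy and every rendered config changes with it.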

But you also have to look at it from the other side. In the first few slides of the above-mentioned presentation, the presenter shows that the network engineer and the application engineer have wildly different views of the network. As they should. The application engineer should not need to know the ins and outs of the network and its behavior. He or she should be presented with an entity that provides connectivity and a set of network services it offers, and it should be trivial for an application to attach itself to any of these services without having to understand network terms. An application engineer should not need to know that DSCP bits must be set to get a certain priority behavior, or have to ask the network folks to place a set of IP or Ethernet endpoints that require lossless connectivity onto network paths that support PFC and QCN, to enable RDMA over Ethernet or even FCoE.

These types of services need to become extremely easy to consume. The architect of a very large private cloud described his ideal model by which applications (and he supports thousands of them) would consume network services. He envisioned an application registration model (through a portal, for instance) where application developers could express, in extremely simple non-network terms, what their application needed: connectivity between components X and Y; the use of specific memory systems that have been predefined to use RDMA over Ethernet (and thus require lossless connectivity); N components that need PCI compliance and therefore must be separated from the rest of the applications. You name it: application behavior described in terms that are as far away as possible from the actual implementation of the tools used to enable that service in the network.
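A registration model along those lines could be sketched as a simple translation layer: the application declares needs in its own vocabulary, and a mapping owned by the network side turns them into network mechanisms. Everything below (the request shape, the need names, the mechanism strings) is a hypothetical illustration of the idea, not an actual product API.

```python
# Illustrative sketch of the registration model described above.
# The application engineer never mentions DSCP, PFC, or VRFs.

APP_REQUEST = {
    "name": "trading-app",
    "connect": [("component-X", "component-Y")],
    "needs": ["lossless", "isolated"],   # plain application-level terms
}

# This mapping belongs to the network side and is invisible to the
# application engineer; the mechanisms listed are assumptions.
NEED_TO_MECHANISM = {
    "lossless": ["enable PFC", "enable QCN"],
    "isolated": ["place in dedicated VRF"],
    "priority": ["mark DSCP EF"],
}

def translate(request):
    """Turn an application-level request into network-level actions."""
    actions = []
    for need in request["needs"]:
        actions.extend(NEED_TO_MECHANISM[need])
    return actions

print(translate(APP_REQUEST))
# prints: ['enable PFC', 'enable QCN', 'place in dedicated VRF']
```

The design point is the direction of the dependency: the application expresses intent, and only the translation layer knows which knobs in the network implement it.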

There is lots of work to do on both ends of this consumable network service model. For the network engineer, it needs to become much easier to enable these network services in a controllable and maintainable manner: easy to design, easy to deploy, easy to debug and maintain. For the application engineer, it needs to become easy to consume these network services: simple and scalable registration and request mechanisms without a lot of network terminology. My post office comparison from a few weeks ago was perhaps very simplistic, but you have to admit, using the USPS is pretty simple. You walk up to the counter, there is a menu of shipment options, each with a price and an expected result; you pick what you want, they charge you for it, and off your package goes. And you don't really worry or care too much how, just that it is delivered in accordance with the service you paid for.

[Today's fun fact: Stewardesses is the longest common word that is typed with only the left hand. As a result it has been banished in favor of flight attendant.]

The post Network Services, Abstracted and Consumable appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
