Square Pegs and Round Holes (Network and Applications)

One size does not fit all

Shannon Poulin, Intel's VP of the Data Center and Connected Systems Group and General Manager of the Data Center Marketing Group, gave the keynote address at Data Center World Fall on transforming the data center for a services-oriented world. Now, that was interesting enough in itself, and of course it touched on SDN and cloud. But what really grabbed my attention was the focus on processor design and how the decomposition of applications and the network into services and functions is changing the way processors and board-level components are designed.

You see, it turns out that one size does not fit all, and the varying resource and processing models of different types of "things" have an impact on how you put a machine together.

I mean, it's all well and good to say commoditized x86 is the future and white-box machines are going to be the basis for our virtualized data center, but the reality is that it's not a good design idea. Why? Because applications are not switches - and vice versa.

I/O versus COMPUTE
Switches are, by their nature, highly dependent on I/O. They need a lot of it. Like Gbps of it. Because what they do is push a lot of data across the network. Applications, on the other hand, need lots and lots of memory and processing power, because what they mostly do is process lots of user requests, each of which eats up memory. What Poulin discussed in his keynote was that this diversity has not gone unnoticed, and that Intel is working on processor and board designs that specifically address the unique needs of networks and applications.
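
To make the contrast concrete, here's a rough back-of-envelope sketch. The port count, line rate, request count, and per-request memory figures are assumptions chosen for illustration, not measurements from any particular switch or server.

```python
# Illustrative numbers only: port count, line rate, request count, and
# per-request memory are assumptions, not measurements.

SWITCH_PORTS = 48        # a typical top-of-rack switch
PORT_SPEED_GBPS = 10     # per-port line rate
switch_io_gbps = SWITCH_PORTS * PORT_SPEED_GBPS
print(f"Switch aggregate I/O: {switch_io_gbps} Gbps")        # 480 Gbps -- almost pure I/O

CONCURRENT_REQUESTS = 50_000   # a busy application server
MEM_PER_REQUEST_KB = 256       # session state, buffers, objects per request
app_memory_gb = CONCURRENT_REQUESTS * MEM_PER_REQUEST_KB / (1024 * 1024)
print(f"App server request memory: {app_memory_gb:.1f} GB")  # ~12 GB, plus the CPU to process each request
```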

What's important for folks to recognize - and take into consideration - in the meantime is that one size does not fit all, and that the pipe dream of a commoditized x86-based "fabric" of resources isn't necessarily going to work. A single "resource fabric" can't serve both network and applications, because network functions and applications have vastly different compute and I/O needs.

Which means that no matter what you do, you can't have a homogenized resource fabric in the data center from which to provision willy-nilly for both network and applications. You need specific sets of resources designated for high-I/O functions and others for heavy processing and memory usage.
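
As a minimal sketch of that point - the pool names, sizes, and placement rule here are hypothetical, not anything Poulin described - you'd carve the data center into pools built for a workload's dominant resource rather than one generic pool:

```python
# Hypothetical resource pools: one built for high I/O, one for heavy
# compute and memory. Names and sizes are illustrative assumptions.
POOLS = {
    "io-heavy":      {"nic_gbps": 100, "cores": 8,  "ram_gb": 32},   # network functions
    "compute-heavy": {"nic_gbps": 10,  "cores": 64, "ram_gb": 512},  # application workloads
}

def place(workload_profile: str) -> str:
    """Pick a pool based on a workload's dominant resource need."""
    return "io-heavy" if workload_profile == "network" else "compute-heavy"

print(place("network"))      # -> io-heavy
print(place("application"))  # -> compute-heavy
```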

And, just to throw a wrench into the works, you've also got to consider that many "network" services aren't as "networky" as they are "application". They're in the middle, layer 4-7 services like application acceleration, load balancing, firewalling and application security. These are high I/O, yes, but they're also compute intensive, performing a variety of processing on data traversing the network.

This was somewhat glossed over in Poulin's keynote. The focus on network versus compute is easier, after all, because there's a clear delineation between the two. Layer 4-7 services, though key to modern data centers, are more difficult to bucketize in terms of the compute and I/O they require.

Depending on what the application service (L4-7) is focused on - an application delivery firewall needs a lot of I/O to defend against network and application DDoS, while a web application firewall needs lots of processing power to scan and evaluate data for threats - each may have different needs in terms of compute and network resources as well.
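
One way to picture it - the 1-5 ratings below are hypothetical relative scores, not benchmarks - is that each L4-7 service lands at its own point between the pure-I/O and pure-compute buckets:

```python
# Hypothetical relative ratings (1 = light, 5 = heavy) -- illustration only.
L4_7_SERVICES = {
    "application delivery firewall": {"io": 5, "compute": 3},  # absorbing DDoS floods
    "web application firewall":      {"io": 3, "compute": 5},  # inspecting payloads for threats
    "load balancer":                 {"io": 5, "compute": 2},
    "application acceleration":      {"io": 4, "compute": 4},
}

for service, needs in L4_7_SERVICES.items():
    if needs["io"] > needs["compute"]:
        bucket = "I/O-leaning"
    elif needs["compute"] > needs["io"]:
        bucket = "compute-leaning"
    else:
        bucket = "both at once"
    print(f"{service}: {bucket} (io={needs['io']}, compute={needs['compute']})")
```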

What I heard from Poulin is that Intel recognizes this, and is focusing resources on developing board-level components, like processors, that are specifically designed to address the unique processing and network needs of both application services and network functions.

What I didn't hear was how such processors and components would address the unique needs of all the services and functions that fall in the middle, for which "general purpose" is not a good fit, but neither is a network-heavy or compute-heavy system. Indeed, the changing landscape in application architecture - the decomposition into services and an API-over-data approach - is changing applications, too. An API focused on data access is much more network (I/O) heavy than a traditional web application.

I'm all for specialization as a means to overcome limitations inherent in general purpose compute when tasked with specialized functions, but let's not overlook that one size does not fit all. We're going to need (for the foreseeable future, anyway) pools (and/or fabrics) made of resources appropriate to the workloads for which they will be primarily responsible.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
