Square Pegs and Round Holes (Network and Applications)

One size does not fit all

Shannon Poulin, VP of the Data Center and Connected Systems Group and General Manager of the Data Center Marketing Group at Intel, gave the keynote address at Data Center World Fall on transforming the data center for a services-oriented world. Now, that was interesting enough in itself, and of course it touched on SDN and cloud. But what really grabbed my attention was a focus on processor design, and how the decomposition of applications and the network into services and functions is changing the way processors and board-level components are designed.

You see, it turns out that one size does not fit all, and the varying resource and processing models of different types of "things" has an impact on how you put a machine together.

I mean, it's all well and good to say commoditized x86 is the future, and white box machines are going to be the basis for our virtualized data center, but the reality is that it's not a sound design idea. Why? Because applications are not switches - and vice versa.

I/O versus COMPUTE
Switches are, by their nature, highly dependent on I/O. They need a lot of it. Like Gbps of it. Because what they do is push a lot of data across the network. Applications, on the other hand, need lots and lots of memory and processing power. Because what they do is mostly process lots of user requests, each of which eats up memory. What Poulin discussed in his keynote was that this diversity has not gone unnoticed, and that Intel is working on processor and board designs that specifically address the unique needs of networks and applications.

What's important for folks to recognize - and take into consideration - in the meantime is that one size does not fit all, and the pipe dream of a commoditized x86-based "fabric" of resources isn't necessarily going to work. A single "resource fabric" can't serve both network and applications, because the compute and network needs of network functions and applications are vastly different.

Which means no matter what you do, you can't have a homogenized resource fabric in the data center from which to provision willy-nilly for both network and applications. You need specific sets of resources designated for high-I/O functions and others for high processing and memory usage.
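To make that concrete, here's a minimal sketch of what "designated sets of resources" might look like in provisioning logic. The pool names, capacities, and the I/O-versus-cores heuristic are all illustrative assumptions, not anything Poulin or Intel described:

```python
# Hypothetical resource pools: one tuned for high I/O (network functions),
# one tuned for compute and memory (applications). Numbers are illustrative.
POOLS = {
    "network": {"io_gbps": 40, "cores": 8, "mem_gb": 16},
    "application": {"io_gbps": 10, "cores": 32, "mem_gb": 256},
}

def pick_pool(workload):
    """Route a workload to the pool matching its dominant resource need.

    Crude heuristic: if the workload demands more Gbps of I/O than CPU
    cores, treat it as I/O-dominant and place it in the network pool.
    """
    if workload["io_gbps"] > workload["cores"]:
        return "network"
    return "application"

switch = {"name": "tor-switch", "io_gbps": 40, "cores": 4}
web_app = {"name": "web-app", "io_gbps": 2, "cores": 16}

print(pick_pool(switch))   # network
print(pick_pool(web_app))  # application
```

The point of the sketch is simply that provisioning has to be pool-aware: the scheduler must know which pool a workload's profile matches, rather than treating all capacity as interchangeable.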

And, just to throw a wrench into the works, you've also got to consider that many "network" services aren't as "networky" as they are "application". They're in the middle, layer 4-7 services like application acceleration, load balancing, firewalling and application security. These are high I/O, yes, but they're also compute intensive, performing a variety of processing on data traversing the network.

This was somewhat glossed over in Poulin's keynote. The focus on network versus compute is easier, after all, because there's a clear delineation between the two. Layer 4-7 services, though key to modern data centers, are more difficult to bucketize in terms of compute and I/O required.

Depending on what the application service (L4-7) is focused on, each may have different needs in terms of compute and network resources as well: an application delivery firewall needs a lot of I/O to defend against network and application DDoS, while a web application firewall needs lots of processing power to scan and evaluate data for threats.
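One way to picture why L4-7 services resist bucketizing is to score them on both axes instead of one. The services, scores, and threshold below are purely illustrative assumptions for the sake of the example:

```python
# Hypothetical resource profiles (0-10 scales) for some L4-7 services.
# The scores and the threshold are illustrative, not measured values.
SERVICES = {
    "ddos_firewall": {"io": 9, "compute": 4},  # high I/O to absorb floods
    "waf":           {"io": 4, "compute": 9},  # deep payload inspection
    "load_balancer": {"io": 8, "compute": 7},  # significant on both axes
}

def classify(profile, threshold=7):
    """Bucket a service as 'io', 'compute', 'mixed', or 'general'."""
    io_heavy = profile["io"] >= threshold
    cpu_heavy = profile["compute"] >= threshold
    if io_heavy and cpu_heavy:
        return "mixed"
    if io_heavy:
        return "io"
    if cpu_heavy:
        return "compute"
    return "general"

for name, profile in SERVICES.items():
    print(name, classify(profile))
```

Services that land in the "mixed" bucket are exactly the ones a two-pool (network versus application) design struggles to place well.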

What I heard from Poulin is that Intel recognizes this, and is focusing resources on developing board level components like processors that are specifically designed to address the unique processing and network needs of both application services and network functions.

What I didn't hear was how such processors and components would address the unique needs of all the services and functions that fall in the middle, for which "general purpose" is not a good fit, but neither is a network-heavy or compute-heavy system. Indeed, the changing landscape in application architecture - the decomposition into services and an API-layer-over-data approach - is changing applications, too. An API focused on data access is much more network (I/O) heavy than a traditional web application.

I'm all for specialization as a means to overcome limitations inherent in general purpose compute when tasked with specialized functions, but let's not overlook that one size does not fit all. We're going to need (for the foreseeable future, anyway) pools (and/or fabrics) made of resources appropriate to the workloads for which they will be primarily responsible.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
