
Unused networking capabilities are useless networking capabilities

Stop me if you have heard this before: “We have had the ability to do that for 10 years. It’s called MPLS.”

For whatever reason, the networking industry seems more open to innovation these days than it has been for much of the past 15 years. We see the rise of important technologies like SDN, NFV, network virtualization, DevOps, and photonic switching. Every new technology threatens to disrupt some existing technology, and along with it whatever business or personal interests have accumulated around the incumbent.

Take SDN for example. A centralized control plane does a couple of things. At its most basic, it makes things like edge policy provisioning more straightforward. Taken a bit further, it provides a point from which the network can be viewed as a single resource, which makes things like monitoring meaningfully different. Beyond that, a global network view allows for intelligent allocation of network resources.
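To make the global-view point concrete, here is a minimal toy sketch of the idea. All of the names here (`Controller`, `Switch`, `install_rule`, `provision`) are invented for illustration; this is not the API of any real SDN controller. The point it demonstrates is the one above: a controller that can see the whole topology can compute an end-to-end path and provision every switch on it in one operation, something no individual device can do on its own.

```python
from collections import deque

class Switch:
    """Toy switch: holds forwarding rules pushed down by the controller."""
    def __init__(self, name):
        self.name = name
        self.rules = {}          # destination -> next hop

    def install_rule(self, dst, next_hop):
        self.rules[dst] = next_hop

class Controller:
    """Toy centralized controller with a global view of the topology."""
    def __init__(self):
        self.switches = {}       # name -> Switch
        self.links = {}          # name -> set of neighbor names

    def add_switch(self, name):
        self.switches[name] = Switch(name)
        self.links.setdefault(name, set())

    def add_link(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def shortest_path(self, src, dst):
        # BFS over the whole topology -- possible only because the
        # controller sees the network as a single resource.
        prev, seen, queue = {}, {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                break
            for nbr in self.links[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    prev[nbr] = node
                    queue.append(nbr)
        path = [dst]
        while path[-1] != src:
            path.append(prev[path[-1]])
        return list(reversed(path))

    def provision(self, src, dst):
        # One call provisions the entire path end to end.
        path = self.shortest_path(src, dst)
        for here, nxt in zip(path, path[1:]):
            self.switches[here].install_rule(dst, nxt)
        return path

# A tiny diamond topology: a -- b -- d and a -- c -- d.
ctl = Controller()
for s in ["a", "b", "c", "d"]:
    ctl.add_switch(s)
ctl.add_link("a", "b")
ctl.add_link("b", "d")
ctl.add_link("a", "c")
ctl.add_link("c", "d")

path = ctl.provision("a", "d")   # picks one of the two equal-cost paths
```

Contrast this with the distributed model, where each switch would run its own protocol instance and converge on a path hop by hop; here the path decision and the provisioning both happen at a single point.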

Some networking diehards scoff at the notion that SDN is innovative. They proclaim proudly, “We have had policy controllers for ages!” They point to OSS/BSS systems and declare, “Monitoring has been done for decades. How do you think service providers stay in business?” If you mention anything about intelligent network pathing, they might ask, “Have you not heard of BGP or MPLS?”

First, let me concede that most of what SDN (and the other disruptive technologies, by the way) is trying to do has been done before. But that doesn’t mean that we should all become pointy-headed academics who say things like “Everything that is old is new again.” Doing so would ignore the very nature of change.

It is true that almost everything that is invented is a derivative piece of work. There have been a number of pieces written about the myth of the a-ha moment. Most epiphanies are the result of dutiful experimentation that yields a meaningful conclusion. That there is a conclusion provides an a-ha moment, but the experimentation that precedes it is where all the work takes place.

As we look at SDN, simply discarding it as a reimagining of things we have already done is ignoring the value of years of real-world experimentation with networking technologies. A more meaningful response is to ask: what have we learned through the years?

What we should take away is that the presence of advanced technologies is not sufficient. While it might be true that MPLS or BGP extensions are enough to make the network do whatever people want it to do, at what point do the diehards relent and ask the question: if the answers are so clear, why do 99% of networks not use these tools?

It is tempting to blame the users. There is a rather condescending viewpoint in some circles that people who manage smaller or less sophisticated networks are somehow incapable. But riddle me this, Batman: isn't the measure of a technology's greatness at least somewhat dependent on how easy it is to use?

Said another way, there are really two sides of every technology: what it does, and how it plugs into what people do. No matter how elegant the solution, if it goes unused, it is in fact useless. As an industry, we have driven networking technologies forward paying careful attention to only half of this equation. In doing so, we have created the kind of inequality that we see in other aspects of society. We have in fact left behind networking’s 99%.

The question we need to ask ourselves is whether this is the right path forward. Does the fact that most people do not deploy sophisticated MPLS networks mean that the average network simply shouldn’t get access to the benefits of a well-traffic-engineered environment? Or does the absence of deep programming expertise mean that network operators shouldn’t enjoy a workflow-optimized experience?

The answer here has to be an emphatic no. We simply have to make some of these benefits more accessible to networks that extend beyond the major service providers, web-scale companies, and Fortune 100. Collectively, we need to be looking beyond just the capability. We need to consider how those capabilities are used in context. And then we need to make the associated workflow (everything from provisioning to validation to troubleshooting) much simpler to use.

If, when we are done, our networks are still unwieldy and fragile, then we need to keep working. The “problem” is not the people managing networks. The customer is never the problem. The real problem is that the networking industry has built half of some of the most powerful stuff imaginable. We need to build the other half.

[Today’s fun fact: The world’s oldest piece of chewing gum is over 9,000 years old. I think I sat at the restaurant table it is stuck to once.]

The post Unused networking capabilities are useless networking capabilities appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts pair deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy, and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and at ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
