

There was an article written by Thomas Gryta in Monday’s Wall Street Journal discussing AT&T’s Domain 2.0 vendor list. The article is behind a paywall, so I won’t quote liberally from it here, but the major takeaways were these:

  • AT&T is looking to cut billions in infrastructure purchases by expanding the vendors from whom they buy equipment
  • This expansion includes white box solutions, and AT&T is opening the door to smaller companies and startups the company would not have previously considered
  • SDN and NFV are reducing the reliance on underlying hardware
  • Upgrades will not require a rip and replace
  • “What used to take 18 months should take minutes” — John Donovan, AT&T

First, we should be fairly careful before drawing a ton of conclusions about vendor revenues based on this. AT&T will still deploy the likes of Cisco and Juniper en masse. Second, the Domain 2.0 project will not change major buying patterns for some time. So while there was a downward reaction in the stock market, the actual financial impact will be unknown for a few years.

That said, the announcement is significant on a couple of levels. AT&T’s endorsement of some of the major networking technology trends likely bolsters the case for the eventual emergence of these technologies. It also sets a time horizon (5 years, per the article) over which we should start to see deployments. This likely serves as the outer bound for carrier adoption.

But more than the long-term vendor implications, what are the drivers in the industry that lead to this type of shift?

AT&T is looking to cut billions

It has been well-documented how expensive networking gear is. On the carrier side, when you exclude Huawei (because of DoD concerns), there are really only a small number of vendors in the space. With so few competitors, there is not a lot of downward pressure on price. The result is that carriers have been forced to pick and choose their equipment from a menu of high-priced options.

So long as demand for new capacity did not outpace budgets, this was a tenable (though not desirable) situation. But traffic continues to grow at a geometric rate, which means that at some point in the not-so-distant future, the cost and revenue lines were going to cross, which would ultimately make the business not viable. AT&T is reacting to this now in the hopes of not only keeping those lines from crossing but also with the intent of widening the gap between them (read: increasing profits).
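
The cost-versus-revenue dynamic is easy to see with a little arithmetic. The numbers below are invented purely for illustration (the article gives no figures): if traffic-driven cost compounds geometrically while revenue grows only modestly, the lines cross within a handful of years.

```python
# Illustrative sketch with made-up numbers: geometric cost growth vs.
# slow revenue growth. Not AT&T's actual financials.
def years_until_cost_exceeds_revenue(cost, revenue, cost_growth, revenue_growth):
    """Return the first year in which cost overtakes revenue.

    Returns None if the lines never cross within a 50-year horizon.
    """
    for year in range(1, 51):
        cost *= 1 + cost_growth       # capacity cost compounds with traffic
        revenue *= 1 + revenue_growth # revenue grows far more slowly
        if cost > revenue:
            return year
    return None

# E.g. $6B of cost growing 20%/yr against $10B of revenue growing 3%/yr
print(years_until_cost_exceeds_revenue(6.0, 10.0, 0.20, 0.03))  # -> 4
```

Even with cost starting at only 60% of revenue, a 20% vs. 3% growth differential closes the gap in four years, which is why acting before the lines cross matters.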

AT&T is experiencing what a lot of infrastructure owners are experiencing: the Year One problem of finding the dollars to keep up with capacity growth. When your deployments are expansive, you have to not only add the requisite new capacity but also refresh devices that are perpetually reaching the end of their useful life. The result is an annual CapEx spend that is not sustainable.

White boxes, smaller companies, and startups

Make no mistake about it: the number one downward force on price is competition. The popular school of thought here is that commodity means cheap. But while there is correlation, there is no definite causation between commoditization and price. I have written before about the profit margins on bottled water. Water remains one of the most commoditized products available, and yet water companies are making upwards of 200% margins.

The real source of pricing relief is competition. And AT&T is very predictably opening up their network to a host of new combatants. Note that they are opening the door to these players, not guaranteeing that they will win. AT&T is setting up their own Thunderdome and allowing the vendors to do whatever they will to compete for their rather substantial business.

As this competition heats up, the incumbents will absolutely tout their support and services organizations. These really are the biggest differentiators once the architectural playing field is leveled. It will be interesting to see how AT&T handles this. The larger companies have larger portfolios with bloated software codebases. Put differently, they require more support. Will AT&T engage with smaller companies whose narrower product focus requires a smaller support footprint?

SDN and NFV; no more rip and replace

The article suggests that these technologies are reducing the dependency on the underlying hardware. While I understand the spirit of the comment, I actually think this is somewhat incorrect. The reality now is that the vast majority of networking features in the big incumbents are delivered in software already. In many cases (routing protocols, for example), the dependence on underlying hardware is near zero already. Juniper, for instance, was successful in extending routing protocols to new platforms largely because of the platform-independence within that part of the software.

The real issue here is that, as delivered today, the software and the hardware are inseparable. The meaningful point is not whether the changes are made in the hardware but rather what is required to push those changes into the network. What SDN and NFV do is allow a layer of functionality to be built on top of the existing network (typically using a controller like OpenDaylight or NSX as a platform). This provides a new path for introducing new capabilities, one that does not require upgrading the underlying hardware as frequently.
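
The controller-layer idea can be sketched in a few lines. This is a hypothetical abstraction, not the actual OpenDaylight or NSX API: the point is only that one network-wide intent is compiled at the controller and pushed to every device, so a new capability lands in the overlay rather than in a hardware upgrade.

```python
# Hypothetical sketch of a controller layer (names invented; not a real
# OpenDaylight/NSX interface). One intent at the top fans out to all devices.
class Controller:
    def __init__(self, devices):
        # device name -> list of rules installed on that device
        self.devices = devices

    def apply_intent(self, match, action):
        """Compile one high-level intent into a rule pushed to every device."""
        rule = {"match": match, "action": action}
        for rules in self.devices.values():
            rules.append(rule)
        return rule

fabric = Controller({"edge-1": [], "edge-2": [], "core-1": []})
fabric.apply_intent(match={"dst_port": 443}, action="prioritize")
print(len(fabric.devices["edge-1"]))  # each device received the rule: 1
```

Adding a feature here means changing `apply_intent` (or adding a new intent type) at the controller; the devices underneath stay as they are.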

It also changes the maintenance and failure domains in a significant way. Not only can features be added separately but they can be upgraded with less risk to subscribers. Managing within AT&T’s billing and customer service constraints should not be overlooked.

What used to take 18 months

This is an obvious nod to the workflow issues that plague any large network operator, particularly those with sprawling networks that require extensive OSS/BSS deployments. Managing thousands of devices through pinpoint control over static configuration is tedious at best. Trying to reconcile edge policy across multiple network domains managed by different teams all in support of any kind of seamless service delivery requires the kind of manual organizational orchestration that would make grown men cry.
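
A minimal sketch of the alternative to per-device hand edits: render every device's configuration from one template and one source of truth. The template syntax and device parameters below are invented for illustration; real OSS/BSS pipelines are far more involved, but the shape is the same.

```python
# Hedged sketch: one template, many devices, instead of "pinpoint control
# over static configuration". Device names and config syntax are made up.
from string import Template

INTERFACE_TEMPLATE = Template(
    "interface $iface\n description $desc\n ip address $addr\n"
)

def render_configs(devices):
    """Render the same policy across many devices from one source of truth."""
    return {
        name: INTERFACE_TEMPLATE.substitute(params)
        for name, params in devices.items()
    }

configs = render_configs({
    "pe-router-1": {"iface": "ge-0/0/0", "desc": "uplink", "addr": "10.0.0.1/31"},
    "pe-router-2": {"iface": "ge-0/0/0", "desc": "uplink", "addr": "10.0.0.3/31"},
})
print(configs["pe-router-1"])
```

A policy change becomes one template edit rendered everywhere, which is exactly the kind of workflow relief the "18 months to minutes" quote is pointing at.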

AT&T is embracing SDN to help clean up this mess. The unstated but significant point here is that cost cutting does not end with controlling the capital outlays. Longer term, operational cost must be addressed. What AT&T is signaling here is that they are interested in more than the Year One CapEx problem; they have their eyes simultaneously on the Year Three OpEx problem.

What next?

AT&T has very cleverly put everyone on notice. SDN represents more than a new technology; it is a new architecture. And a move to a new architecture means that the reliance on decades of esoteric, niche networking features is going away. This levels the playing field, which stimulates competition, and that will give AT&T a path to the cost cutting (both CapEx and OpEx) that they so crave.

What AT&T is less clear about is how they will get from here to there. The transition to a new architecture is a lot like getting in shape. You don’t drop 40 pounds by running on the treadmill for 22 hours. You do it by running for 45 minutes a day. Similarly, AT&T (and any other company looking to take advantage of the changes in technology) will need to commit dutifully to making the shift. This means changing how they look at gear, who they talk to, how they purchase, and how they deploy. This change will be as much organizational as it is technological.

And those vendors who understand that they are changing how AT&T thinks about business as much as how they manage a network will be in the best position to capitalize.

[Today’s fun fact: It is illegal to shave while driving a car in Massachusetts. For DevOps engineers, it is unconscionable to shave ever.]

The post appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
