There was an article written by Thomas Gryta in Monday’s Wall Street Journal discussing AT&T’s Domain 2.0 vendor list. The article is behind a pay wall, so I won’t quote liberally from it here, but the major takeaways were these:

  • AT&T is looking to cut billions in infrastructure purchases by expanding the vendors from whom they buy equipment
  • This expansion includes white box solutions, and AT&T is opening the door to smaller companies and startups the company would not have previously considered
  • SDN and NFV are reducing the reliance on underlying hardware
  • Upgrades will not require a rip and replace
  • “What used to take 18 months should take minutes” — John Donovan, AT&T

First, we should be fairly careful before drawing a ton of conclusions about vendor revenues based on this. AT&T will still deploy the likes of Cisco and Juniper en masse. Second, the Domain 2.0 project will not change major buying patterns for some time. So while there was a downward reaction in the stock market, the actual financial impact will be unknown for a few years.

That said, the announcement is significant on a couple of levels. AT&T’s endorsement of some of the major networking technology trends likely bolsters the case for the eventual emergence of these technologies. It also sets a time horizon (5 years, per the article) over which we should start to see deployments. This likely serves as the outer bound for carrier adoption.

But more than the long-term vendor implications, what are the drivers in the industry that lead to this type of shift?

AT&T is looking to cut billions

It has been well-documented how expensive networking gear is. On the carrier side, when you exclude Huawei (because of DoD concerns), there are really only a small number of vendors in the space. With so few competitors, there is not a lot of downward pressure on price. The result is that carriers have been forced to pick and choose their equipment from a menu of high-priced options.

So long as demand for new capacity did not outpace budgets, this was a tenable (though not desirable) situation. But traffic continues to grow at a geometric rate, which means that at some point in the not-so-distant future the cost and revenue lines will cross, ultimately making the business not viable. AT&T is reacting to this now in the hopes of not only keeping those lines from crossing but also widening the gap between them (read: increasing profits).
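The crossover logic above reduces to a couple of lines of arithmetic. The sketch below is purely illustrative: the starting figures and growth rates are assumptions for the sake of the example, not AT&T numbers.

```python
def crossover_year(revenue, cost, revenue_growth, cost_growth):
    """Return the first year in which annual cost exceeds annual revenue,
    assuming cost grows faster than revenue (otherwise this never returns)."""
    year = 0
    while cost <= revenue:
        year += 1
        revenue *= 1 + revenue_growth
        cost *= 1 + cost_growth
    return year

# Hypothetical: revenue grows ~3%/yr while capacity-driven cost grows ~15%/yr.
print(crossover_year(revenue=100.0, cost=60.0, revenue_growth=0.03, cost_growth=0.15))
```

Even with cost starting well below revenue, a faster-compounding cost line closes the gap in a handful of years, which is exactly the dynamic a carrier wants to head off early.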

AT&T is experiencing what a lot of infrastructure owners are experiencing: the Year One problem, in which simply finding the dollars to keep up with capacity growth is increasingly difficult. When your deployments are expansive, you have to not only add the requisite new capacity but also refresh devices that are perpetually reaching the end of their useful life. The result is an annual CapEx spend that is not sustainable.

White boxes, smaller companies, and startups

Make no mistake about it: the number one downward force on price is competition. The popular school of thought here is that commodity means cheap. But while there is correlation, there is no definite causation between commodity and price. I have written before about the profit margins on bottled water. Water remains one of the most commoditized products available, and yet water companies are making margins upwards of 200%.

The real source of pricing relief is competition. And AT&T is very predictably opening up their network to a host of new combatants. Note that they are opening the door to these players, not guaranteeing that they will win. AT&T is setting up their own Thunderdome and allowing the vendors to do whatever they will to compete for their rather substantial business.

As this competition heats up, the incumbents will absolutely tout their support and services organizations. Those really are the biggest differentiators once the architectural playing field is leveled. It will be interesting to see how AT&T handles this. The larger companies have larger portfolios with bloated software codebases; put differently, they require more support. Will AT&T engage with smaller companies whose narrower product focus requires a smaller support footprint?

SDN and NFV; no more rip and replace

The article suggests that these technologies are reducing the dependency on the underlying hardware. While I understand the spirit of the comment, I actually think this is somewhat incorrect. The reality now is that the vast majority of networking features in the big incumbents are delivered in software already. In many cases (routing protocols, for example), the dependence on underlying hardware is near zero already. Juniper, for instance, was successful in extending routing protocols to new platforms largely because of the platform-independence within that part of the software.

The real issue here is that the software and the hardware are inseparable. The meaningful point is not whether the changes are made in the hardware but rather what is required to push those changes into the network. What SDN and NFV do is allow a layer of functionality to be built on top of the existing network (typically using a controller like OpenDaylight or NSX as a platform). This provides a new path for the introduction of new capabilities, and one that does not require as frequent upgrades of the underlying hardware.
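To make the controller-as-platform idea concrete, here is a minimal sketch of pushing a new forwarding rule through a controller's northbound REST API, in the OpenDaylight RESTCONF style mentioned above. The hostname, node id, URL path, and payload shape are all illustrative assumptions; a real deployment would follow the specific controller's API documentation.

```python
import json

def build_flow_request(controller, node, table, flow_id, out_port):
    """Construct the URL and JSON body for a single flow entry
    (OpenDaylight-style RESTCONF path; illustrative, not authoritative)."""
    url = (f"http://{controller}/restconf/config/opendaylight-inventory:nodes"
           f"/node/{node}/flow-node-inventory:table/{table}/flow/{flow_id}")
    body = {
        "flow": [{
            "id": str(flow_id),
            "table_id": table,
            "priority": 100,
            "instructions": {"instruction": [{
                "order": 0,
                "apply-actions": {"action": [{
                    "order": 0,
                    "output-action": {"output-node-connector": str(out_port)},
                }]},
            }]},
        }]
    }
    return url, json.dumps(body)

url, payload = build_flow_request("controller:8181", "openflow:1", 0, 1, 2)
# An operator would then PUT the payload to the controller, e.g. with requests:
#   requests.put(url, data=payload, auth=("admin", "admin"),
#                headers={"Content-Type": "application/json"})
print(url)
```

The point is not the specific payload but the shape of the workflow: new behavior enters the network through an API call to an overlay layer, not through a hardware upgrade.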

It also changes the maintenance and failure domains in a significant way. Not only can features be added separately, but they can also be upgraded with less risk to subscribers. The ability to manage those upgrades within AT&T’s billing and customer service constraints should not be overlooked.

What used to take 18 months

This is an obvious nod to the workflow issues that plague any large network operator, particularly those with sprawling networks that require extensive OSS/BSS deployments. Managing thousands of devices through pinpoint control over static configuration is tedious at best. Trying to reconcile edge policy across multiple network domains managed by different teams all in support of any kind of seamless service delivery requires the kind of manual organizational orchestration that would make grown men cry.
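The alternative to pinpoint control over static configuration is rendering per-device config from a single statement of intent. This sketch is hypothetical (the device names, interfaces, and config snippet are invented for illustration), but it shows the basic move from hand-editing thousands of boxes to templating one policy across a domain.

```python
from string import Template

# One edge policy, written once; $service and $interface are filled per device.
EDGE_POLICY = Template(
    "interface $interface\n"
    " description $service customer edge\n"
    " service-policy output $service-qos\n"
)

def render_configs(service, edges):
    """Render the same edge policy for every device in the domain."""
    return {dev: EDGE_POLICY.substitute(service=service, interface=ifc)
            for dev, ifc in edges.items()}

configs = render_configs("gold", {"pe1": "ge-0/0/1", "pe2": "ge-0/0/2"})
print(configs["pe1"])
```

Reconciling edge policy across domains stops being manual organizational orchestration and becomes a diff between rendered intent and deployed state.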

AT&T is embracing SDN to help clean up this mess. The unstated but significant point here is that cost cutting does not end with controlling the capital outlays. Longer term, operational cost must be addressed. What AT&T is signaling here is that they are interested in more than the Year One CapEx problem; they have their eyes simultaneously on the Year Three OpEx problem.

What next?

AT&T has very cleverly put everyone on notice. SDN represents more than a new technology; it is a new architecture. And a move to a new architecture means that the reliance on decades of esoteric, niche networking features is going away. This levels the playing field, which stimulates competition, and that will give AT&T a path to the cost cutting (both CapEx and OpEx) that they so crave.

What AT&T is less clear about is how they will get from here to there. The transition to a new architecture is a lot like getting in shape. You don’t drop 40 pounds by running on the treadmill for 22 hours. You do it by running for 45 minutes a day. Similarly, AT&T (and any other company looking to take advantage of the changes in technology) will need to commit dutifully to making the shift. This means changing how they look at gear, who they talk to, how they purchase, and how they deploy. This change will be as much organizational as it is technological.

And those vendors who understand that they are changing how AT&T thinks about business as much as how they manage a network will be in the best position to capitalize.

[Today’s fun fact: It is illegal to shave while driving a car in Massachusetts. For DevOps engineers, it is unconscionable to shave ever.]

The post appeared first on Plexxi.


More Stories By Michael Bushong

The best marketing efforts leverage deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong has acquired these skills having spent 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent the last several years at Juniper leading their SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase, and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
