What Networking Can Learn from the NFL

It is the NFL’s overall position on its own evolution that has secured its place at the top of the entertainment pantheon

We are a few short days away from the biggest spectacle in sports – the Super Bowl. It is impossible to avoid talk this week of Peyton Manning, the Denver Broncos, the Seattle Seahawks, and the NFL in general. But does the NFL have anything to teach tech industries?

The NFL is a massively successful franchise by almost any measure. Despite a rash of recent scandals, including a pay-for-injury bounty program and a major lawsuit and settlement tied to concussions, the league continues to grow its fan base – both in the US and abroad – while raking in record numbers of viewers and revenue. At the heart of the NFL’s resilience in the face of scandal and its seemingly bottomless pit of revenues is an uncanny ability to reinvent itself.

In fact, it is the NFL’s overall position on its own evolution that has secured its place at the top of the entertainment pantheon.

Instant Replay in the NFL
The NFL adopted instant replay in 1986 after a fairly healthy debate. Detractors pointed out that the game had always been officiated by humans, complete with their flaws. Games had been decided by crews of officials who had to get it right in the moment, and changing that would somehow alter the NFL’s traditions. But it took only a few high-profile officiating mishaps played back on national television to sway sentiment, and in 1986, by a vote of 23 to 4 (with one abstaining), the NFL ushered instant replay into the league.

But instant replay’s first stint in the NFL lasted only until 1992. In its first incarnation, instant replay ranged from effective to wildly unpopular. The rules for which plays could be reviewed were not always clear. The process was slow and at times awkward, making games take too long. And the original incarnation of instant replay allowed officials to review their own calls, which led to somewhat maddening outcomes.

Instant replay went dark until making its triumphant return in 1999. With a few process tweaks (coaches being able to challenge specific calls) and the advance of technology (HD and more angles), the system is clearly here to stay.

But what is so important about how the NFL rolled out instant replay? And how does this apply to networking?

Instant Replay and Networking
First, it is worth noting that instant replay was not a unanimous choice. There were detractors – members of the Old Guard who thought that the new way of doing business was too big a departure from the past. In networking, we face much the same. There are countless people who fight change at every step because it is not consistent with the old way of doing things. They cling to their technological religion while the rest of the world moves forward. It’s not that their experiences are irrelevant or unimportant, but their inability to work alongside the disruptors means that those experiences are kept private, forcing the New Guard to stumble over many of the same obstacles. This is not good for anyone.

Second, we should all realize that instant replay was tried and it failed. But despite the failure, the NFL was able to bring it back to the great benefit of the game. As the SDN revolution rages on, there are people who point to the past. They say clever things like “All that is old is new again” or they refer derisively to past attempts the industry has made to solve some of the same problems being addressed by SDN today.

But if ideas were permanently shelved because of setback or failure, where would we be? Using the past as a compass for the future is helpful; clinging to the past and using it to justify a refusal to move forward is destructive.

And finally, the NFL has shown a remarkable ability to iterate on its ideas. Instant replay was successful in its second run because of the changes the NFL made. New technology will not be invented with perfect foresight. The initial ideas might not even be as important as the iterative adjustments. We need to embrace failure and use it to adapt and overcome. By not being religious about its history, the NFL has successfully evolved. The question for networking specialists everywhere is to what extent our own industry is capable of setting aside its sacred cows.

Rushing, West Coast Offense, Hurry-Up Offense
Football is remarkable in how much it changes over time. Decades ago, offense was all about having a good running back. The passing game was an afterthought, used to lure defenders away from the line of scrimmage. Those days yielded to a more pass-happy time featuring the San Diego Chargers’ Air Coryell offense and the Houston Oilers’ Run and Shoot. Those teams handed the offensive mantle over to Bill Walsh’s West Coast Offense. Then we saw New Orleans’ more vertical passing attack. And now we have the hurry-up offense.

It almost doesn’t matter what is different between these systems. What is amazing is that so many systems have been able to thrive. The NFL, despite its traditions, seems most committed to reinventing itself. And for every one of these offensive systems, there are a dozen others that failed to catch on.

Evolution and Networking
The NFL has figured out that it is a league that thrives on new ideas. Whether it’s the NFL as a whole or individual teams and players, the entire league is committed to trying new things. That commitment has created a hyper-fertile breeding ground for new ideas. It is no surprise that the league has managed to reinvent itself every few years, much to the delight of its legions of fans.

Networking is going through an interesting time. This period of 3-4 years might very well be looked back on as a Golden Era for networking. The number of new ideas being tested in the marketplace right now is amazing. SDN, NFV, DevOps, Photonic Switching, Sensor Networking, Network Virtualization… and the list goes on. But these new ideas came on the heels of what really were the Dark Ages. After the dot-com bust, the networking world went dark. Sure, there were new knobs and doodads that were useful for folks, but as an industry, the innovation was pretty incremental.

So when this Golden Era of Networking is over, what kind of networking industry will we have? Will we return to the Dark Ages, or will we end up in another Period of Enlightenment? If the NFL is any indication of what continuous innovation looks like, it would seem the better answer is to embrace the new ideas. But are we culturally prepared to keep embracing disruption? Are we collectively unafraid enough of failure for that kind of future to suit us? If you ask me, we have to be.

Defense Wins Championships
There is an old saw that goes “Defense wins championships.” At this time of year, it gets trotted out as one of those universal truths. But here’s the reality: evolution wins championships. In the NFL, offenses and defenses win titles at about the same rate (a slight nod to defenses, but only by a hair). It’s a team’s ability to evolve over the years – and even during a game – that dictates success.

Our industry is no different. We have our own Old Guard that talks about past technologies with the kind of reverence that you see when historians put on their smoking jackets and grab their pipes. But our industry is defined by its future more than its past. There is a lot to learn from our history, but if we let those teachings get in the way of our future, we will be no better off than we are now.

So when you are grabbing a beer or diving into that 7-layer dip at whatever Super Bowl party you end up at, talk about the role of innovation and how it reigns supreme over those dusty old defenses.

[Today's fun fact: Clans of long ago that wanted to get rid of their unwanted people without killing them used to burn their houses down, hence the expression "To get fired." I wonder where the term "lay off" came from then?]

More Stories By Michael Bushong

The best marketing efforts combine a deep understanding of technology with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work at the University of California, Berkeley in advanced fluid mechanics and heat transfer lends new meaning to the marketing phrase "This isn't rocket science."
