Minimal (Network) Exposure

A lesson from object-oriented principles that should be applied to the network

One of the primary principles of object-oriented programming (OOP) is encapsulation. Encapsulation protects the state of an object from being manipulated in ways that aren't consistent with how that state is intended to change. The variable (state) is made private, that is to say only the object itself can change it directly. Think of it as the difference between an automatic transmission and a standard (stick). With the latter, I can change gears whenever I see fit. The problem is that "whenever I see fit" may not be appropriate to how the gears should actually be shifted, which is how engines end up being redlined.

An automatic transmission, on the other hand, encapsulates the process of shifting gears (changing the state of the engine) and only major transitions - reverse, park, drive, neutral - are accessible by the driver.

In an object-oriented paradigm, this is how the state of an object is isolated and protected from manipulation that might introduce instability or cause other logical errors in processing. If we modeled a car as an OOP object, it might look like this:
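A minimal Python sketch of that model (the class, gear names, and methods here are illustrative, not pulled from any particular codebase):

class Car:
    """A car with encapsulated state: gear and speed are private and can
    only be changed through well-defined operations."""

    _GEARS = ("park", "reverse", "neutral", "drive")

    def __init__(self):
        self._gear = "park"   # private state; not meant to be set directly
        self._speed = 0

    def shift(self, gear):
        """The only sanctioned way to change gears; validates the transition."""
        if gear not in self._GEARS:
            raise ValueError("unknown gear: " + gear)
        if gear in ("reverse", "park") and self._speed > 0:
            raise RuntimeError("cannot shift into " + gear + " while moving")
        self._gear = gear

    def accelerate(self, mph):
        """Speed changes only through this operation, never by poking _speed."""
        if self._gear != "drive":
            raise RuntimeError("cannot accelerate unless in drive")
        self._speed += mph

Callers go through shift() and accelerate(); they never assign _gear or _speed directly, so every change to the car's state is validated first.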

The actual code within each of the OPERATIONS can manipulate the STATE of the car, but it does so using well-defined rules. You can't just reach in and change the state without going through the OPERATIONS. Period.

This provides a place to do logic and error checking, ensures consistency, and means that every other part of the system that might touch that object can be assured of the integrity of its state.

So, what does all that have to do with networking and infrastructure and DevOps and SDN and... ?

Quite a bit, actually.

One of the reasons SDN (and DevOps, for that matter) is so critical to the next generation of data center architectures is the ability to centralize the state of the network. Right now that state is all over the place. Literally. The state of any given network is not well known because it's distributed across every router, switch and layer 4-7 service that comprises "the network." That makes troubleshooting difficult, to say the least, and there's no good way to predict how a change in the state of the network will impact, well, the network.

An object-oriented approach to the network, in general, says "let's centralize state and provide a single interface through which it can be modified." In other words, there is a single authoritative source for the state of a network.

This is the approach that Cisco has taken with its Application Centric Infrastructure (ACI). It's the approach that OpenFlow-based SDN takes, and it's generally the approach that DevOps takes to treating infrastructure as code. That's the theory. Realizing it in practice requires that "the network" be API-enabled, so the state and telemetry the authoritative source needs to maintain an accurate view of the network can be disseminated.
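As a rough illustration only (the controller hostname and endpoints below are invented for this sketch, not Cisco's ACI API or any specific SDN controller's), the operational model looks less like logging into individual boxes and more like reading and writing state through one authoritative service:

import requests

# Hypothetical single authoritative source for network state.
CONTROLLER = "https://sdn-controller.example.com/api/v1"

def get_network_state(session):
    """Read the controller's consolidated view of the network."""
    resp = session.get(CONTROLLER + "/state")
    resp.raise_for_status()
    return resp.json()

def request_change(session, change):
    """Submit a desired change; the controller validates it and pushes
    the resulting configuration out to the affected devices."""
    resp = session.post(CONTROLLER + "/changes", json=change)
    resp.raise_for_status()
    return resp.json()

Tooling and operators consume those two calls; nobody writes to a device's running configuration directly, which is exactly the encapsulation point.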

The danger, however, is that we end up with a bunch of infrastructure and network devices that expose state via APIs through which operators and engineers end up directly manipulating configurations. Unfortunately, changing a single variable on a network device can be as devastating as opening Pandora's Box. Because of the way different devices relate policies and ACLs and routing tables to different objects - IP addresses, VLANs, etc. - a single change can have significant repercussions.

Similarly, the order in which various objects are configured can impact the overall stability and success of a configuration. Offer up a highly granular API and leave consumers to their own devices (ha! pun not intended, but not regretted, either), and they can easily shoot themselves (or their network) in the foot.

Providing a more policy-driven, application-focused approach to provisioning and configuration reduces the possibility of these incidents and improves the stability, and therefore the reliability, of the network services upon which applications rely. Encouraging engineers and operators to use application-driven provisioning and management, rather than a line-by-line, API-call-by-API-call approach, reduces the API surface area required and returns to encapsulation: preserving and protecting network state by ensuring consistency in how that state is changed.
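To make the contrast concrete, here's a hedged sketch (the function names and policy fields are invented for illustration, not any vendor's actual API). The first path issues a series of order-sensitive, device-level calls; the second hands the platform a single application-level declaration and lets it derive, validate and apply the device-level changes:

# Imperative, call-by-call: every step is order-sensitive and each one is
# a chance to leave the network in a half-configured, inconsistent state.
def provision_imperatively(device_api):
    device_api.create_vlan(vlan_id=212)
    device_api.assign_vlan_to_port(vlan_id=212, port="eth1/7")
    device_api.add_acl_entry(acl="web-app", action="permit", port=443)
    device_api.bind_acl(acl="web-app", vlan_id=212)

# Declarative, application-focused: describe the desired end state once.
WEB_APP_POLICY = {
    "application": "web-storefront",
    "tier": "frontend",
    "allow": [{"protocol": "tcp", "port": 443}],
    "segment": "vlan-212",
}

def provision_by_policy(controller, policy):
    """The controller owns the how: ordering, validation and rollback."""
    return controller.apply(policy)

The policy path keeps device-level state encapsulated behind the controller, just as the Car object keeps its gears behind shift().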

This software-defined, API-driven approach is a whole new world for networking and operations because it deviates from the direct-touch world in more ways than just moving from CLIs to APIs. It's also about encapsulation; about moving from standard transmissions to automatics and trusting that the manufacturer of the car does indeed know best how and when to shift from one gear to another.


H/T: This article on encapsulation in code is a good read that doesn't require much development background and may be of interest (and help) to operators and network engineers: http://java.dzone.com/articles/evil-getters-and-setters-where
