Where Networks and Application Architecture Converge Lies DevOps

All key areas of IT have converged on a singular focus: applications

On this side is a variant of SDN: network service virtualization (NSV). On the other side is an emerging application architecture: microservices. Where they meet lies devops.

One of the most fascinating things about the technological shifts occurring today is watching them all converge on a singular point: applications. Whether it's securing or delivering, deploying or accessing, every key area of IT has converged on a singular focus: applications.

[Image: shifts center on applications]

From a developer-turned-network-geek perspective, that's doubly interesting, because one impacts the other, and vice versa. One of the trends in application architecture today is a shift toward microservices. I'll oversimplify for a moment and describe that as SOA without all the baggage. A recent post on High Scalability explains the architecture - and its impact on infrastructure requirements:

Where a monolithic application might have been deployed to a small application server cluster, you now have tens of separate services to build, test, deploy and run, potentially in polyglot languages and environments.

All of these services potentially need clustering for failover and resilience, turning your single monolithic system into, say, 20 services consisting of 40-60 processes after we've added resilience.

 

Throw in load balancers and messaging layers for plumbing between the services and the estate starts to become pretty large when compared to that single monolithic application that delivered the equivalent business functionality!

Microservices - Not A Free Lunch!
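To make that growth in the "estate" concrete, here's a minimal back-of-the-envelope sketch; the service names and replica counts are invented for illustration, not taken from the post above.

    # Hypothetical inventory of microservices carved out of one monolith.
    # Names and replica counts are illustrative only.
    services = {
        "catalog": 3,       # replicas for failover and resilience
        "cart": 2,
        "checkout": 3,
        "inventory": 2,
        "pricing": 2,
        "notifications": 2,
    }

    processes = sum(services.values())
    load_balancers = len(services)          # one virtual LB fronting each service
    messaging_links = len(services) - 1     # rough plumbing between services

    print(f"{len(services)} services -> {processes} app processes, "
          f"{load_balancers} load balancers, ~{messaging_links}+ messaging links")

Even this toy inventory shows how quickly the operational footprint outgrows the single monolith it replaced.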

Now let's shift gears and peek at what's going on over in network land. You might recall we recently discussed network service virtualization. If not, here's a quick summary from Nick Lippis:

NSV seeks to virtualize enterprise appliances, such as firewalls, load balancers, application accelerators, application delivery controllers, Intrusion Protection Systems, WAN optimizers, call managers, etc., instantiated for each application. Each instance of each NSV is created for a specific application. That is, if there are 10 applications that require network services, then each application will be configured with its own instantiation of that service. That is, 10 applications, then 10 NSV firewalls.

In short, NSV seeks to virtualize network services by creating an instance of the network service for each application versus virtualizing a network service once for all applications. NSV hopes to present significant capex and opex relief from hardware appliances, as well as an efficient way of applying network services to applications without chaining or tagging packets and rapid automated, on-demand application deployment.

Lippis Report 217: It’s Network Service Virtualization in the Enterprise rather than Network Function Virtualization
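To see what "one instance per application" implies in practice, here's a rough sketch; the provision() function and the service catalog are hypothetical stand-ins for whatever orchestration API actually creates the instances.

    # Sketch of NSV-style provisioning: a dedicated virtual instance of each
    # network service per application, rather than one shared appliance for
    # all applications. The provision() call and catalog are placeholders.
    NETWORK_SERVICES = ["firewall", "load_balancer", "app_accelerator"]

    def provision(app, service):
        """Stand-in for an orchestration call that creates a per-application
        instance of a virtualized network service."""
        print(f"created {service} instance dedicated to {app}")

    applications = ["billing", "crm", "portal"]   # 3 apps -> 3 virtual firewalls, etc.

    for app in applications:
        for service in NETWORK_SERVICES:
            provision(app, service)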

Reading both, one might assume some level of collusion between the two but that's unlikely to be the case. The divide between application architects and networky groups is well established; they really don't play well together. And yet both these trends recognize the need to meet in the middle, in the L4-7 service layer, to provide for scalability and other "plumbing" services.

From a scalability perspective, this is very much a vertical partitioning-based scalability pattern, in which load is spread across the distinct functional boundaries of a problem space, each handled by a different processing unit. In today's architectures those functional boundaries are embodied by microservice definitions. Each service is responsible for a discrete function; the point of microservices is, to a large extent, to decompose monolithic applications into individual, domain-specific (functional) services.
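In L4-7 terms, that vertical partitioning often surfaces as routing along functional boundaries. A toy sketch, with paths and service addresses invented purely for illustration:

    # Toy illustration of vertical partitioning: requests are split along
    # functional boundaries, each handled by a different microservice.
    ROUTES = {
        "/orders":  "orders-service:8001",
        "/users":   "users-service:8002",
        "/billing": "billing-service:8003",
    }

    def route(path):
        for prefix, upstream in ROUTES.items():
            if path.startswith(prefix):
                return upstream
        return "monolith-fallback:8000"   # anything not yet decomposed

    assert route("/orders/42") == "orders-service:8001"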

Overall, this means services can be scaled individually, on demand, which is far more efficient than scaling a monolithic application. But it does introduce complexity, as there are necessarily more moving parts, and it tends to complicate monitoring, forcing a more application-centric approach to it.
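Scaling each service on its own demand curve might look something like the following sketch; the capacity numbers and minimums are assumptions, not recommendations.

    # Sketch of per-service, demand-driven scaling: each service scales on its
    # own load instead of scaling the whole monolith. Numbers are invented.
    def desired_replicas(requests_per_sec, capacity_per_replica=100, min_replicas=2):
        """Return how many replicas a single service should run for its load."""
        needed = -(-requests_per_sec // capacity_per_replica)   # ceiling division
        return max(min_replicas, needed)

    observed = {"orders": 480, "users": 90, "billing": 1500}    # req/s per service
    for service, rps in observed.items():
        print(service, "->", desired_replicas(rps), "replicas")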

A Symbiotic Relationship

The application architect recognizes the need for these services and, to some extent, laments the complexity they will introduce. Network service virtualization, on the other side, offers to fulfill that need, recognizing that efficiency and, ultimately, simplification come from providing those services in a "rapid automated, on-demand" fashion.

These issues - the plumbing and the monitoring - fall squarely into the realm of issues that can be resolved by applying devops to operations. Automated provisioning, treating infrastructure as code, and enabling a more holistic view of "applications" are all enabling capabilities of what devops aims to achieve.
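Treating that plumbing as code is where devops earns its keep. A minimal, hypothetical "infrastructure as code" declaration, with the schema and apply() function invented for illustration, might look like this:

    # Minimal infrastructure-as-code sketch: the application and the L4-7
    # services it depends on are declared together and provisioned as one unit.
    # The schema and apply() function are hypothetical.
    app_spec = {
        "name": "orders-service",
        "replicas": 3,
        "l4_7_services": {
            "load_balancer": {"algorithm": "least_connections"},
            "firewall": {"allow": ["portal", "billing"]},
            "monitoring": {"health_check": "/healthz", "interval_s": 5},
        },
    }

    def apply(spec):
        """Stand-in for an idempotent provisioning step driven by the spec."""
        print(f"provisioning {spec['name']} with "
              f"{', '.join(spec['l4_7_services'])}")

    apply(app_spec)

Because the spec describes the application and its network services together, it also gives operations the holistic, application-centric view mentioned above.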

For one of the first times I can remember, the operational burden imposed by technological shifts in application architecture is nearly simultaneously being addressed by the technological shifts in the network. In fact, one could argue that the shifts occurring in the network toward network service virtualization are actually enabling the shift in application architecture. Being able to rapidly provision, manage and monitor the L4-7 services necessary to deliver microservices increases the ability to take advantage of the architecture.

Like the question of the chicken and the egg, it really doesn't matter which came first. What matters is that they're complementary and both driving toward the same goal: accelerated application deployment and delivery of an exceptional end user experience.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
