Three Things You Didn't Know About BIG-IP

Some of the things you didn't really know about BIG-IP

There are a lot of things people know about F5 BIG-IP, a lot of things people think they know about BIG-IP, and some things people don't know about BIG-IP at all*.

So I thought it'd be a good idea to talk about some of the things you don't know about BIG-IP or in some cases, the things you didn't really know about BIG-IP. Kind of a myth-busting post, if you will.

So without further ado, let's get onto the list, shall we? It's Friday, after all, and there's an Internet full of cat videos waiting to be watched.

1. BIG-IP is not hardware.

Oh, I know, F5 delivers BIG-IP on hardware that's called BIG-IP XXXX so really, what's the difference?

BIG-IP is a product family, a brand if you will. It's a way to identify that "these products are related" and go together. We could call it just the XXX hardware platform, but because it's specifically designed to enhance BIG-IP software, it kind of makes sense to group them together under the same name. I mean, it's not like you're going to deploy anything other than BIG-IP software on BIG-IP hardware, right?

But you might (and probably will) deploy BIG-IP software in a virtual form factor (we support all major hypervisors - Citrix, VMware, KVM, Microsoft) - or in a cloud like AWS, Rackspace, Microsoft Azure or VMware vCloud Air. That's because the BIG-IP software is not reliant on BIG-IP hardware. Oh, it benefits from BIG-IP hardware because the hardware is designed to enhance the performance and scale of the BIG-IP software but it's not a requirement. The BIG-IP software is enhanced by, not dependent on, BIG-IP hardware.

And software it is. With over 15 years of development, it's a significant piece of software. And most of that code is dedicated to TMOS and the modules from which our application services are ultimately delivered. Some of the code is specific to BIG-IP hardware, in order to eke out the most performance and scale out of the system, but that code is abstracted enough that the bulk of the software is deployable just about anywhere.

But that doesn't mean you have to pair the two together. You can certainly enjoy the benefits of BIG-IP software (which include the extensibility of any other software platform) without simultaneously employing the use of BIG-IP hardware.

2. BIG-IP is not just a load balancer.

I know, surprise right? Granted, BIG-IP is almost universally synonymous with load balancing because that's where we started and well, it's really uber awesome at load balancing. But that's just one service out of a large (and growing) number of services available for BIG-IP. That's because BIG-IP is not just software, it's a software platform. And platforms are meant to be extended. In the case of BIG-IP that's through software modules that deliver one or more application services. BIG-IP APM (Access Policy Manager), for example, offers not only SSL-VPN services but cloud identity federation services and application access control as well as identity services and protocol gateway services.

I will not deluge you with a complete list, but trust me that there are a plethora of services spanning device, network and application foci to choose from. And the list keeps growing. For example, just this past year we added secure web and HTTP/2 gateway services. Because it's a platform, not a product.

BIG-IP software is based on a full-proxy architecture, meaning it's got a dual stack - one for the client side and one for the app side. That gives it tremendous flexibility in how it can interact with application traffic and data. Sure, it can load balance the heck out of your apps like nobody's business (and with more efficacy and intelligence than any other solution out there), but it can also do just about anything an app can do, because the separation of the stacks means it is, technically, an app itself. It's an endpoint, just like your app server.
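To make the dual-stack idea concrete, here's a minimal sketch in Python (not F5 code - TMOS is its own implementation): a full proxy terminates the client's connection on one socket and opens a completely separate connection to the app, so it can inspect or rewrite traffic as an endpoint rather than just forwarding packets. The ports and the payload transformation are invented for illustration.

```python
import socket
import threading
import time

def echo_server(port):
    # App-side endpoint: a trivial "app server" that echoes what it receives.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.sendall(b"echo:" + conn.recv(1024))
    conn.close()
    srv.close()

def full_proxy(listen_port, app_port):
    # Client-side stack: terminate the client's TCP connection here.
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    request = client.recv(1024)

    # App-side stack: a completely separate connection to the app server.
    # Because the proxy is an endpoint on both sides, it can freely
    # inspect or rewrite the payload in between (here: uppercasing it).
    app = socket.create_connection(("127.0.0.1", app_port))
    app.sendall(request.upper())
    response = app.recv(1024)
    app.close()

    client.sendall(response)
    client.close()
    srv.close()

APP_PORT, PROXY_PORT = 9801, 9800  # arbitrary localhost ports for the demo
threading.Thread(target=echo_server, args=(APP_PORT,)).start()
t = threading.Thread(target=full_proxy, args=(PROXY_PORT, APP_PORT))
t.start()
time.sleep(0.2)  # give both listeners a moment to bind

client = socket.create_connection(("127.0.0.1", PROXY_PORT))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
print(reply)  # b'echo:HELLO'
```

Note that nothing on the client side is "passed through" - the response the client sees is a distinct write on a distinct connection, which is exactly what lets a full proxy act on behalf of the app.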

Now, you can't write just anything and deploy it on BIG-IP software because the platform is for us to use to develop new services. But you can write code that runs within the context of any service and interact with the platform to gather statistics, change behavior and call out to other services to share or gather information important to the app or the service itself.
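For instance, a few lines of iRules (BIG-IP's event-driven, Tcl-based scripting) can change routing behavior per request. A minimal sketch - the pool names here are hypothetical and would correspond to pools defined on your BIG-IP:

```tcl
when HTTP_REQUEST {
    # Steer API traffic to a dedicated pool; everything else to the web pool.
    if { [HTTP::uri] starts_with "/api" } {
        pool api_pool
    } else {
        pool web_pool
    }
}
```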

That's a far sight more than just a "load balancer", isn't it?

3. BIG-IP delivers application services which are not the same as application networking services.

I know this might seem pedantic, but it's an important distinction that needs to be made sooner rather than later. I'm not going to diagram sentences to explain this one, but when we say "application services" we mean "services for applications." When you say "application networking services" you are saying "networking services for applications". There's a big difference there in what that ultimately means. Networking services are those that connect, transport, and secure network traffic. When they're focused on applications it means that those services are acting on behalf of applications.

When we say "application services" we're talking about intermediate services that reside in the data path and offer application-specific functionality. Web application security, for example, must (if it's going to have any degree of efficacy) be application-specific. It's not just about transporting traffic from point A to point B, it's about performing a service on behalf of the application that improves its security, availability or performance. They aren't "networking" in the traditional sense that networking is about routing and switching and firewalling. They are networking in that they operate at the upper layers (4-7) of the OSI network stack. But operationally they are essentially applications themselves (see #2 above) that just happen to be located "in" the network, because it makes sense to topologically deploy those services upstream from the application.

After all, when the point of a service is to prevent bad requests from consuming resources unnecessarily or compromising an application it makes sense to ensure that process happens before the request actually gets to the application.

Yes, BIG-IP also provides some application networking services, like acting as a protocol transition point - from SPDY or HTTP/2 to HTTP/1 and vice versa, from IPv4 to IPv6 and the reverse, or from VXLAN to VLAN to NVGRE or whatever combination of SDN overlay protocols you're looking to use. But the bulk of services delivered by a BIG-IP are application services. No additional modifier required.

There you have it. Three things you (perhaps | mostly | almost) didn't know about BIG-IP that now you do. And we all know that knowing is half the battle.

The other half is red and blue lasers.

Happy Friday!

* If that sounds sort of like Bilbo Baggins' farewell speech at his 111th birthday party then I did it right.


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
