Two Options for Web Content Filtering at the Speed of Now

Because not everything the internet offers is suitable for all users, organizations use web filters to block unwanted content. However, filtering content becomes challenging as network speeds increase. Two filtering architectures are explored below, along with criteria to help you decide which option is the best fit for your organization.

How Fast Can You Filter?
To ensure service levels and capacity as internet traffic increases, organizations need higher-speed networks. In telecom networks serving hundreds of thousands of users, 100 Gbps links are being introduced to keep up with demand. The market has matured around web content filtering solutions for 1 Gbps and 10 Gbps, but filtering at 100 Gbps poses a whole new set of challenges.

To filter content at this rate, the system needs a large amount of processing power, and traffic must be distributed across the available processing resources. This is usually achieved with hash-based 2-tuple or 5-tuple flow distribution on subscriber IP addresses. In telecom core networks, subscriber IP addresses are carried inside GTP tunnels, so GTP support is required for efficient load distribution.
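As a rough illustration, the sketch below shows 2-tuple flow distribution: hashing the subscriber (source) and destination IP addresses to pick one of N worker queues. The worker count and IP addresses are hypothetical examples, and for GTP-encapsulated traffic the inner IP addresses would first have to be extracted from the tunnel.

```go
// Minimal sketch of hash-based 2-tuple flow distribution. A 5-tuple variant
// would also fold the protocol and port numbers into the hash.
package main

import (
	"fmt"
	"hash/fnv"
	"net"
)

// workerFor maps a flow to one of n workers using an FNV-1a hash of the
// source and destination IP addresses (the 2-tuple).
func workerFor(srcIP, dstIP net.IP, n uint32) uint32 {
	h := fnv.New32a()
	h.Write(srcIP.To16())
	h.Write(dstIP.To16())
	return h.Sum32() % n
}

func main() {
	const workers = 96                // e.g. 48 cores with hyper-threading
	src := net.ParseIP("10.20.30.40") // subscriber IP (the inner IP for GTP traffic)
	dst := net.ParseIP("203.0.113.7")
	fmt.Printf("flow handled by worker %d of %d\n", workerFor(src, dst, workers), workers)
}
```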

Building Filtering Capacity
There are two ways to provide the processing resources and load distribution that high-speed filtering requires. The first option is a stacked, distributed server solution. It comprises a high-end load balancer and standard COTS servers equipped with several 10 Gbps standard NICs. The load balancer connects in-line with the 100 Gbps link and distributes traffic to the 10 Gbps ports on the standard servers. The load balancer must support GTP and flow distribution based on subscriber IP addresses.

Because the load balancer cannot guarantee a 100 percent even load distribution, overcapacity is needed on the distribution side. A reasonable configuration comprises 24 x 10 Gbps links: three standard servers, each equipped with four 2 x 10 Gbps standard NICs, together provide 240 Gbps of traffic capacity (3 x 4 x 2 x 10 Gbps). Twenty-four cables for the 10 Gbps links round out the solution.
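The capacity arithmetic can be checked with a few lines of code; assuming the 100 Gbps link runs full duplex (up to 200 Gbps to distribute), the 240 Gbps of 10 Gbps ports leaves roughly 20 percent headroom for uneven distribution.

```go
// Sketch of the capacity arithmetic for the stacked, distributed solution.
package main

import "fmt"

func main() {
	const (
		servers     = 3
		nicsPerSrv  = 4
		portsPerNIC = 2
		gbpsPerPort = 10
		linkGbps    = 100 // the filtered link; full duplex means up to 2*linkGbps to distribute
	)
	capacity := servers * nicsPerSrv * portsPerNIC * gbpsPerPort // 240 Gbps aggregate
	fmt.Printf("aggregate distribution capacity: %d Gbps\n", capacity)
	fmt.Printf("headroom over a full-duplex %dG link: %d Gbps\n", linkGbps, capacity-2*linkGbps)
}
```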

Though the load balancer is costly, the reasonably priced COTS servers and standard NICs offset some of the expense. However, the solution involves many components and complex cabling, the rack space required is relatively large, and system management is complex due to the multi-chassis design.

The second option consolidates load distribution, 100G network connectivity and the total processing power in a single server. This single, consolidated server solution requires one COTS server and two 1 x 100G Smart NICs. Since up to 200 Gbps of traffic needs to be processed within the same server system, the server must provide many cores for parallel processing. For example, a server with 48 CPU cores can run up to 96 flow-processing threads in parallel using hyper-threading.
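The sketch below illustrates the threading model under these assumptions: one flow-processing worker per hardware thread, each fed by its own queue, mirroring how the NIC would distribute flows. The packet type and filter function are placeholders, not an actual filtering engine.

```go
// Minimal sketch of sizing the flow-processing pool to the hardware threads
// the server exposes (e.g. 96 on a 48-core, hyper-threaded system).
package main

import (
	"fmt"
	"runtime"
	"sync"
)

type packet struct{ payload []byte }

// filter stands in for the actual web content filtering logic.
func filter(p packet) { _ = p }

func main() {
	workers := runtime.NumCPU() // hardware threads, including hyper-threads
	queues := make([]chan packet, workers)
	var wg sync.WaitGroup
	for i := range queues {
		queues[i] = make(chan packet, 1024)
		wg.Add(1)
		go func(q chan packet) { // one flow-processing worker per queue
			defer wg.Done()
			for p := range q {
				filter(p)
			}
		}(queues[i])
	}
	fmt.Printf("running %d flow-processing workers\n", workers)
	for _, q := range queues {
		close(q)
	}
	wg.Wait()
}
```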

To fully utilize the CPU cores, the Smart NIC must support load distribution to as many threads as the server system provides. To keep that distribution balanced, it must also support GTP tunneling so that flows can be hashed on the inner subscriber IP addresses. The Smart NIC should sustain these features at full throughput under a full-duplex 100 Gbps traffic load, for any packet size.

This single-server solution has multiple benefits. It provides single-point system management, with no complex dependencies between multiple chassis. Cabling is simple because only one component is involved. The footprint in the server rack is very small, reducing rack-space hosting expenses.

Determining Factors
The technical specifications for a high-speed web filtering solution are important, but so is the total cost of ownership. Here are some significant parameters for operating expenditure (OPEX) and capital expenditure (CAPEX) calculations. For OPEX, consider rack-space hosting expenses, warranty and support, and power consumption (including cooling) for servers, NICs and load balancers. CAPEX considerations include the costs of software, servers, and Smart NICs or standard NICs.
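A simple comparison model, assuming TCO is just CAPEX plus accumulated annual OPEX over the planning horizon, might look like the sketch below. The cost figures are deliberately left as placeholders to be filled in from actual quotes; nothing here reflects real pricing.

```go
// Sketch of a total-cost-of-ownership comparison between the two options.
package main

import "fmt"

// option captures the cost drivers named above; all figures are placeholders.
type option struct {
	name       string
	capex      float64 // software, servers, NICs (plus load balancer for the stacked option)
	annualOpex float64 // rack space, warranty and support, power and cooling
}

func (o option) tco(years float64) float64 { return o.capex + years*o.annualOpex }

func main() {
	options := []option{
		{name: "stacked, distributed servers", capex: 0, annualOpex: 0}, // fill in from quotes
		{name: "single, consolidated server", capex: 0, annualOpex: 0},  // fill in from quotes
	}
	for _, o := range options {
		fmt.Printf("%s: 5-year TCO = %.0f\n", o.name, o.tco(5))
	}
}
```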

So, which web content filtering option is right for your organization? It depends on your use case. The difference in costs between the two options will certainly be a determining factor, so consider carefully which method will best serve your needs and those of your customers. If your situation is better served by a simplified, consolidated method, take a closer look at how Smart NICs can provide the support for the speed you need.

About the Author

Sven Olav Lund is a Senior Product Manager at Napatech and has over 30 years of experience in the IT and Telecom industry. Prior to joining Napatech in 2006, he was a Software Architect for home media gateway products at Triple Play Technologies. From 2002 to 2004 he worked as a Software Architect for mobile phone platforms at Microcell / Flextronics ODM and later at Danish Wireless Design / Infineon AG.

As a Software Engineer, Sven Olav started his career architecting and developing software for various gateway and router products at Intel and Case Technologies. He has an MSc degree in Electrical Engineering from the Danish Technical University.
