Rethinking the Mesh Service with Application Traffic Management – The New Stack

Alan Murphy

Alan is senior product manager of service mesh for NGINX at F5 Networks.

When many customers say they need a service mesh, they are not really asking for a service mesh as such. What they are really asking for is a better way to manage, observe, secure and evolve applications built from microservices. To put it another way, they don’t want an excavator to dig the foundation; they want a hole in the ground. How they get the hole is not as important as its size and depth.

The lack of clarity about what is actually needed is a sign that all the hype around Kubernetes and service mesh has drowned out first principles. The most important element of cloud native applications is the “last mile” delivery of application traffic to end users, whether those users are human clients, servers or IoT devices. For internal applications, the customers are typically other teams within the organization that consume microservices-based applications.

In the larger tech space outside of Kubernetes and service mesh, responsibility for last-mile delivery lies with application delivery controllers (ADCs), a multibillion-dollar industry with dozens of players. ADCs are the single most essential contributor to a great application user experience. In other words, an ADC – with its ability to manage, shape and optimize application (Layer 7) traffic – is the shovel that digs the hole in the ground. That is how you deliver on the first principles of the application experience.

In the Kubernetes space, however, ADCs and Layer 7 traffic management are largely lacking.

In fact, despite the history and validation of the ADC market, application traffic management is perhaps the least developed element of the cloud native landscape. Kubernetes has traditionally focused on the network layer (Layer 4), leaving Layer 7 as an afterthought. This leaves platform operations teams to fend for themselves or rely on relatively untested solutions, even for mission-critical applications. So what can Kubernetes and service mesh architects do to up their game and provide the robust application traffic management required to ensure a superior user experience? Here are three suggestions.

1. Deploy a data plane with maximum Layer 7 capabilities

While there are a variety of options for transport networking in Kubernetes (think container network interfaces), there is little application traffic management at Layer 7. By attaching a sidecar data plane with rich functionality for both Layer 4 and Layer 7 traffic to your application containers, you can manage network and application traffic efficiently. The two types of traffic really need to be on an equal footing, for all of the reasons touched on above. A richer data plane lets you focus on delivering the Layer 7 functionality that your architecture and composite applications need – security, observability, stability and resiliency – while also adding reliability and security at Layer 4. Just pushing packets at an acceptable rate is not enough. You wouldn’t accept Layer 4-only functionality outside of Kubernetes, so why accept it in the Kubernetes and service mesh landscape?
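To make the Layer 4 versus Layer 7 distinction concrete, here is a minimal, hypothetical Python sketch (not any particular mesh’s or proxy’s API; all names are invented for illustration). A Layer 4 decision sees only the transport tuple, while a Layer 7 decision can act on request-level attributes such as the HTTP path and headers:

```python
# Illustrative only: contrast a Layer 4 routing decision (transport
# tuple) with a Layer 7 routing decision (application attributes).

def route_l4(src_port: int, dst_port: int) -> str:
    # Layer 4: only ports and addresses are visible, so every
    # connection to port 443 lands in the same backend pool.
    return "backend-pool" if dst_port == 443 else "drop"

def route_l7(method: str, path: str, headers: dict) -> str:
    # Layer 7: the proxy can inspect the request itself and apply
    # policy such as canary splitting or per-path backends.
    if headers.get("x-canary") == "true":
        return "service-v2"        # send opted-in traffic to a canary
    if path.startswith("/api/"):
        return "api-service"
    return "web-service"
```

The point of the sketch is simply that a Layer 4-only data plane can never make the second kind of decision, because the information it would need is invisible below Layer 7.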

2. Design your clusters for applications, not tools

You created your Kubernetes environment to host applications. Yet when designing and sizing clusters, we often ask “How many Ingress controllers will I need?” rather than “What supporting application delivery services do my applications need?” Counting Ingress controllers rather than considering services is like designing a skyscraper based on how many people pass through the front door rather than on what they need to do in their offices. Applications that must meet industry standards, such as in healthcare, financial services and government, may require different application services than generic microservices do. Even the same services can have radically different requirements in two different contexts. For example, a data streaming application is likely to have different availability and security requirements if it is the backend for continuous monitoring of patient vital signs than if it is tracking incoming weather data.

3. Consider deploying ADCs as part of your Kubernetes architecture

You’ve invested heavily in application security and resiliency everywhere else in your infrastructure: on premises with traditional load balancers, in virtualized environments with vADCs and in the cloud with cloud-native front-end load balancers. So why not make the same investment in your Kubernetes clusters and service mesh? You plan to use Kubernetes to deliver production applications, right? The last mile may shrink to the last centimeter in a containerized environment, but traffic management is still essential. ADCs are optimized to deliver applications and embody decades of hard-won wisdom in shaping, accelerating and filtering traffic.

Conclusion: an ADC is your shovel

Throughout their history, ADCs have evolved into the front-end traffic solution for every previous paradigm shift in IT infrastructure: bare-metal on-premises deployments, top-of-rack colocation, virtualization and finally cloud computing. Kubernetes and containerization are the latest iteration, but the need to effectively manage application delivery to ensure a superior user experience and optimize the cost of running applications globally remains unchanged.

In Kubernetes, you still need content acceleration and caching, SSL termination, web application firewalls and load balancing. These capabilities work best as complementary features within a unified traffic management solution. The needs are the same, just in a slightly different form factor and with requirements to accommodate a more ephemeral and dynamic infrastructure. So consider equipping your Kubernetes clusters with cloud native ADCs to improve application management and make service delivery smoother, more reliable and more secure.
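As a rough illustration of why those capabilities work best as one unified solution rather than separate tools, here is a hedged Python sketch (all names invented; no real ADC works exactly this way) that composes TLS termination, WAF filtering, caching and round-robin load balancing as ordered stages of a single request pipeline:

```python
# Illustrative sketch of a unified traffic-management pipeline:
# each stage sees the request exactly once, in a defined order.
import itertools
from dataclasses import dataclass

@dataclass
class Request:
    path: str
    tls: bool = True

class TrafficPipeline:
    def __init__(self, backends):
        self._backends = itertools.cycle(backends)  # round-robin LB
        self._cache = {}

    def handle(self, req: Request) -> str:
        # 1. TLS termination: decrypt once at the edge.
        req.tls = False
        # 2. WAF: reject obviously malicious requests early.
        if "../" in req.path:
            return "403 Forbidden"
        # 3. Cache: serve repeated content without a backend hop.
        if req.path in self._cache:
            return self._cache[req.path]
        # 4. Load balancing: forward to the next backend pod.
        backend = next(self._backends)
        response = f"200 OK from {backend}"
        self._cache[req.path] = response
        return response
```

Running the stages in one pipeline means the WAF inspects decrypted traffic and the cache sits in front of the load balancer, ordering guarantees that are hard to maintain when each capability is a separate, independently configured tool.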

Featured image provided by NGINX.
