AKS networking 101: Configuring network policies

Azure Kubernetes Service (AKS) is a fully managed container orchestration platform. You can use it to simplify the deployment, scaling, and management of containerized applications in the Azure cloud infrastructure. AKS takes care of the underlying infrastructure and cluster management tasks so you can focus on building and running cloud-native applications.

AKS provides a fully managed Kubernetes environment, eliminating the need for manual cluster setup, maintenance, and upgrades. The environment includes a scalable and highly available networking infrastructure that supports Kubernetes network policies. Network policies provide a mechanism to manage the flow of traffic between pods. This ensures application security, optimizes network resource use, and allows for network segmentation at the pod level.

This article explores the essentials of AKS networking, providing real-world use cases that demonstrate how to effectively configure network policies to establish robust network security for pods in AKS clusters.

Primer on AKS networking

AKS network infrastructure enables seamless communication within Kubernetes clusters, allowing pods to interact with each other, access external services, and use various resources. This lets you distribute your application workloads efficiently across the cloud infrastructure, improving performance and availability for your users.

AKS networking infrastructure uses overlay networks to create a virtual network on top of the physical network. It enables pods to communicate with each other as if they were on the same physical network—even if they’re running on different nodes or regions.

Network plugins in AKS are responsible for implementing and managing the overlay network. These plugins interface with the Container Networking Interface (CNI), which handles network functions in each node, to configure and manage network connectivity for pods within the cluster.

AKS supports two primary network plugins: Kubenet and Azure Container Network Interface (Azure CNI).

Kubenet (basic networking)

Kubenet is the default networking plugin in AKS. It’s a simple and lightweight plugin suitable for small to medium-sized clusters with basic networking requirements.

Fig. 1: AKS Kubenet architecture

In the Kubenet configuration, each node in the AKS cluster receives an IP address from the Azure virtual network subnet, while pods get IP addresses from a logically separate, cluster-internal address range. Network address translation (NAT) lets pods reach resources outside the node, and AKS maintains route tables so pods on different nodes can still reach each other using their internal IPs. While this approach is simple and conserves VNet address space, it can be limiting in scenarios that require more advanced networking features.

Azure CNI

Azure CNI is an advanced networking plugin that provides more flexibility and control over network configuration. It’s suitable for large-scale clusters with complex networking requirements, such as multi-tenancy, network segmentation, and integration with Azure services.

Fig. 2: AKS CNI architecture

In the Azure CNI setup, each pod obtains a unique IP address. This allows for more granular control over network policies, enabling improved network segmentation and enhanced security measures.

Other key components include:

  • Virtual network (VNet) — VNet is the underlying network infrastructure that allows communication between different resources within the AKS cluster.
  • Subnet — Within the VNet, AKS uses subnets to logically separate different components like the control plane, node pools, and other resources.
  • Service endpoints — Service endpoints expose services running inside the cluster to external clients. They provide a stable IP address and port for accessing the service, even if the underlying pods are scaled or moved.
  • Ingress and egress rules — These are rules used to control the flow of traffic to and from pods. They allow administrators to define the specific types of network communication permitted for each pod.

Pods and communication in AKS

Pods are the basic unit of deployment and management for containerized applications. Each pod contains one or more containers that share resources such as CPU, memory, storage, and a network namespace. Pods isolate applications from other workloads running on the same node so they can run independently and securely.

AKS supports several options for communication between pods and other resources within the cluster. By default, pods can communicate with each other within the cluster. Kubernetes’ networking components, such as the kube-proxy service, manage network rules and routes and allow pods to interact with each other and other resources in the cluster.

Network policies enable you to define granular network rules that manage the traffic between pods. This provides micro-segmentation for pods, similar to how Network Security Groups (NSGs) provide micro-segmentation for virtual machines (VMs). The policies are essential for implementing security best practices, such as zero-trust network setups. They also enable advanced network configurations such as network segmentation, load balancing, and service discovery.

Before exploring how to configure network policies in AKS, you’ll create an AKS cluster. Once you set up the AKS cluster, you can start creating and managing network policies to secure and control pod communication in the cluster. To follow along, ensure you have an Azure subscription.

Setting up your AKS cluster

On the Azure portal overview page, click Create a resource.

In the Categories section, click Containers > Azure Kubernetes Service (AKS).

In the Basics tab, specify the following cluster details:

  • Azure Subscription — Select the Azure subscription you want to use.
  • Resource group — Select an existing resource group or create a new one.
  • Cluster preset configuration — For testing purposes, select the Dev/test preset. For production applications, choose a different preset as needed.
  • Cluster name — Provide a unique name for your AKS cluster.
  • Region — Select the region where you want to create your AKS cluster.

Click Review + create, review the cluster details, and click Create.

Access the cluster

To connect to and access your AKS cluster, navigate to the Azure portal and select the resource group containing your AKS cluster. Click the name of your AKS cluster to open its overview page.

On the overview page, click Connect. Then, in the left pane, click Open Cloud Shell.

Select your preferred shell (Bash or PowerShell) and set the environment.

Now, run the command below to retrieve the credentials for your AKS cluster and configure the local kubectl configuration file (~/.kube/config) to connect to the cluster. This allows you to manage the cluster and its resources using the kubectl terminal session.


az aks get-credentials --resource-group <your-resource-group-name> --name <your-aks-cluster-name>

The az aks get-credentials command merges the cluster's context into your kubeconfig and sets it as the current context. If you manage multiple clusters, run the following command to explicitly switch to the AKS cluster's context:


kubectl config use-context <your-aks-cluster>

Finally, run this command to view the number of active nodes in the cluster:


kubectl get nodes

You should see a list of active nodes in your cluster. The agentpool node is the default node added to the cluster when you create a cluster using the Azure portal. You can add additional nodes when creating the cluster or scale the cluster up or down as needed.
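
For example, if you later want to scale the cluster from the Cloud Shell, a sketch of the Azure CLI command looks like this (it assumes the default node pool is named agentpool, as shown in Fig. 3, and an illustrative target of three nodes):

# Scale the default node pool to three nodes
az aks scale --resource-group <your-resource-group-name> --name <your-aks-cluster-name> --nodepool-name agentpool --node-count 3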

Fig. 3: Nodes in an AKS cluster

You have successfully connected to your cluster. Now you can manage your AKS cluster and its resources.

Creating an environment for network policy implementation

Before you can implement network policies in your AKS cluster, you must create a namespace. A namespace is a logical division of a Kubernetes cluster that allows you to organize and manage resources. You use this namespace to host and run pods and deploy containerized applications.

Create a namespace using the command below:


kubectl create namespace <name-space>

To test network policies in the AKS cluster, deploy a simple application using the tutum/hello-world image. This application helps verify that network policies are working as expected.

Run the following commands to deploy two pods in your namespace:


kubectl run <your pod1-name> --image=tutum/hello-world --namespace=<name-space> --labels=app=hello-world,role=backend --expose --port=80

kubectl run <your pod2-name> --image=tutum/hello-world --namespace=<name-space> --labels=app=hello-world,role=backend --expose --port=80

Run the command below to check the pods that are running:


kubectl get pods -n <name-space>

You should see two pods running:

Fig. 4: Number of active pods in namespace

The next section explores how to create and apply network policies to manage network traffic.

Understanding network policies in AKS

AKS clusters allow pods to communicate with each other without any restrictions by default. This means that any pod can send traffic to any other pod, regardless of whether or not the receiving pod is expecting or prepared to receive that traffic.

However, this open communication can pose security risks and make it challenging to implement secure access controls. For instance, malicious pods or compromised containers could exploit this unrestricted communication to interact with other pods, potentially gaining access to sensitive data or disrupting critical services.

Network policies are a Kubernetes feature, available in AKS, that enables precise management and control of traffic flow between pods within clusters. They work by defining network communication rules based on attributes such as pod labels, namespaces, and IP addresses. These rules dictate which pods can communicate with each other and specify the ports and protocols allowed for that communication. Network policies provide an additional layer of security and isolation in the cluster.

Beyond security benefits, they have the potential to optimize cluster performance by minimizing unnecessary network traffic. By restricting communication between pods, network policies contribute to efficient network resource use, support faster operations, and reduce latency.

Structure of a network policy in AKS

AKS supports two network policy providers:

  • Azure Network Policy Manager — This is Azure’s implementation of network policies, which provides a simplified and integrated experience within AKS.
  • Calico Network Policies — This is a popular open-source networking and security solution used to implement network policies in AKS.

Regardless of the provider, the basic structure of a network policy includes the following sections:

  • apiVersion — This specifies the network policy API version used (for example, networking.k8s.io/v1).
  • kind — This specifies the type of Kubernetes resource, in this case NetworkPolicy.
  • metadata — This contains metadata about the network policy, such as its name and labels.
  • spec — This encompasses the actual network policy rules. The rules are defined using a combination of pod selectors and ingress and egress rules.

Pod selectors define which pods are affected by a network policy. These selectors consist of key-value pairs assigned to pods as labels.

Here is an example of a pod selector that would target all pods that have the label app: test.


podSelector:
  matchLabels:
    app: test

Ingress and egress rules specify the permitted network traffic flow to and from the selected pods. Ingress rules determine the allowed inbound traffic that can enter the pods, while egress rules define the allowed outbound traffic that can leave the pods.

Typically, ingress and egress rules comprise the following fields:

  • from — Used in ingress rules to specify the traffic source, such as a pod selector, namespace selector, or IP block.
  • to — Used in egress rules to specify the traffic destination.
  • ports — Specifies the ports (and optionally protocols) allowed for the traffic.

Here’s an example of an ingress rule that allows traffic from all pods in the default namespace to reach the selected pods (those matching the policy’s podSelector, such as app: test above) on port 80:


ingress:
- from:
  - namespaceSelector:
      matchLabels:
        kubernetes.io/metadata.name: default
  ports:
  - port: 80
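
Egress rules follow the same pattern. The sketch below is illustrative rather than part of this tutorial's setup: it would allow the selected pods to send traffic only to pods labeled app: database on TCP port 5432. In a full policy, you would typically also list Egress under spec.policyTypes so the intent is explicit.

egress:
- to:
  - podSelector:
      matchLabels:
        app: database
  ports:
  - protocol: TCP
    port: 5432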

In addition to the basic examples highlighted above, you can use network policies for a variety of more advanced scenarios, including:

  • Controlling traffic based on IP addresses — You can use ingress and egress rules to specify which IP addresses or Classless Inter-Domain Routing (CIDR) ranges are allowed to communicate with pods. This lets you restrict access to pods from specific networks or subnets, as sketched after this list.
  • Enforcing least privilege — You can limit pod access to only the resources and services they need. For example, you can specify that only clients X and Y can access a backend API named Z.
  • Segmenting the network into logical zones — You can create network policies that divide the cluster into logical zones, such as development, testing, and production. This helps to isolate traffic and prevent unauthorized access between different zones.
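
As an illustration of the first scenario, an ingress rule can use an ipBlock to admit traffic from a CIDR range while carving out an exception; the addresses below are placeholders:

ingress:
- from:
  - ipBlock:
      cidr: 10.0.0.0/16
      except:
      - 10.0.1.0/24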

Implementing basic network policies

When setting up an AKS cluster using the Azure command line interface (CLI), you must define the network plugin and the specific network policy provider. By default, the Azure portal configures Kubenet as the network plugin and Calico as the network policy for the cluster.
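
For reference, here is a minimal sketch of creating a cluster from the CLI with Azure CNI as the plugin and Calico as the network policy provider; the names are placeholders, and you should adjust the options to your environment:

az aks create \
  --resource-group <your-resource-group-name> \
  --name <your-aks-cluster-name> \
  --node-count 1 \
  --network-plugin azure \
  --network-policy calico \
  --generate-ssh-keys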

Regardless of the chosen plugin and policy provider, you define policies the same way: by creating a manifest YAML file that includes the pod selector and the ingress and egress rules governing pod communication in your cluster.

A basic example is implementing a policy to restrict communication between the two pods you created initially in your namespace.

To do so, create a new manifest file named network-policy.yaml in the Azure Cloud Shell terminal:


touch network-policy.yaml

Open the file in the Nano text editor:


nano network-policy.yaml

Then, include the following content:


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-all-traffic
  namespace: <name-space>
spec:
  podSelector:
    matchLabels:
      app: hello-world
      role: backend
  ingress: []

Don’t forget to change the namespace attribute to your corresponding cluster namespace.

In this example, the policy selects the pods labeled app: hello-world and role: backend; because the ingress list is empty, all inbound traffic to them is denied, including traffic from each other. You can specify different labels as needed.

With this policy applied, an attempt to connect from, for instance, pod1 to pod2 should fail, since no inbound traffic to these pods is allowed.

Save your changes and exit the Nano text editor by pressing Ctrl + X, then Y to confirm.

To apply the network policy to your AKS cluster, run the command below:


kubectl apply -f network-policy.yaml

To verify that the network policy has been applied to the cluster, first, check the assigned IP addresses of the two pods:


kubectl get pods -n <name-space> -o wide

To test the network policy and verify that it is effectively restricting traffic between the pods, run the following command:


kubectl exec -it <pod1-name> --namespace=<name-space> -- wget -qO- http://<pod2-IP>

You should now see a timeout response logged in the terminal.

Fig. 5: Basic network policy example connection timeout response

This indicates that the network policy is working correctly and regulates communication between the two pods.
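
Because the policy silently drops the packets, wget may take a while to report the failure. Assuming the wget in the container supports the -T (timeout) flag, which both GNU and BusyBox builds do, you can shorten the wait:

kubectl exec -it <pod1-name> --namespace=<name-space> -- wget -T 5 -qO- http://<pod2-IP>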

Advanced network policy scenarios

As your AKS cluster grows in complexity and you deploy more applications and services in different pods, you may encounter scenarios where a basic network policy is no longer sufficient. In this case, you’ll need to restructure the policy to ensure it addresses the networking requirements of your cluster. A common example is enabling communication between specific pods and allowing external access to selected pods.

Enabling only specific pods to communicate

Microservices architecture has transformed the way applications are built and deployed. Each microservice, such as a core function or a dedicated application programming interface (API), operates independently in pods, communicating with other components and resources through a shared network infrastructure.

Managing the communication and traffic flow between the pods in the cluster is essential to ensure the entire system operates smoothly and securely. This may include detailing which pods communicate in the network. The rules can specify pod selectors to match specific pod names, labels, ports, or allowed IP addresses.

You can, for example, implement a policy that allows certain pods to communicate while restricting communication with others.

First, run the command below to create a third pod in your namespace, this time labeled with the frontend role:


kubectl run <your pod3-name> --image=tutum/hello-world --namespace=<your-name-space> --labels app=hello-world,role=frontend

Update the network-policy.yaml file with the following contents:


apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
  namespace: <your-name-space>
spec:
  podSelector:
    matchLabels:
      app: hello-world
      role: backend
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          app: hello-world
          role: frontend

In this case, the policy selects the backend pods (app: hello-world, role: backend), and its ingress rule allows traffic to them only from pods labeled app: hello-world and role: frontend in any namespace (the empty namespaceSelector). Pods without those labels can no longer reach the backend pods.

Run the command below to apply this network policy:


kubectl apply -f network-policy.yaml

Then, test the connection between the pods.


kubectl exec -it <pod3-name> -n <name-space> -- wget -qO- http://<pod2-IP>

Once the two pods establish a successful connection, you should be able to view the contents of an index.html file served by the hello-world application running on pod2.

Fig. 6: Advanced network policy example of a successful pod connection response

Now, create a new test pod without labeling it with the allowed labels:


kubectl run <pod4-name> --image=tutum/hello-world --namespace=<your-name-space>

Then, try to connect from the fourth pod to one of the pods with the allowed labels.


kubectl exec -it <new-pod4-name> --namespace=<name-space> -- wget -qO- http://<pod2-IP>

You should see a connection timeout message, as shown below:

Fig. 7: Advanced network policy example unsuccessful pod connection timeout response

Since the fourth pod doesn’t have the allowed labels, it isn’t allowed to communicate with pod2. This confirms that the network policy is working as intended — regulating the network traffic between the specified pods.
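
To confirm that the labels alone drive this behavior, you could add the allowed labels to the fourth pod and repeat the request; the connection should then succeed:

# Add the labels the policy allows, then retry the request
kubectl label pod <pod4-name> app=hello-world role=frontend --namespace=<name-space>
kubectl exec -it <pod4-name> --namespace=<name-space> -- wget -qO- http://<pod2-IP>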

Allowing external access to a selected pod

In certain situations, you want to enable external access to a particular pod or service outside the cluster. This is often necessary when dependencies extend beyond the Kubernetes environment.

There are several methods for allowing external access to pods in your clusters:

  • Load balancer — This creates a Kubernetes Service of type LoadBalancer, which provisions an Azure load balancer with an external IP address and distributes incoming traffic across the backing pods.
  • Node ports — This exposes a static port on each node in the cluster, allowing external access to the service (and its pods) through any node's IP address.
  • Ingress controller — This advanced method allows you to configure rules for routing traffic to different services based on various criteria, such as the hostname or path.

The load balancer method is a straightforward approach to exposing a pod service to the Internet. To allow external access using the AKS load balancer service, you must create a deployment manifest file and a service manifest file. The deployment manifest specifies the pods and their configuration, while the service manifest exposes the pods to external traffic.

Create a test-deployment.yaml manifest file with the following contents:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: tutum/hello-world
        ports:
        - containerPort: 80

Then, create a test-service.yaml manifest file and include the following contents:


apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
spec:
  selector:
    app: hello-world
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

Save and apply the two manifests:


kubectl apply -f test-deployment.yaml
kubectl apply -f test-service.yaml

This creates a deployment running a single hello-world pod and exposes it through a LoadBalancer service.

To check the service's status and retrieve its external IP address, run:


kubectl get services <service-name>

The service obtains an external IP once it’s provisioned.
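
Provisioning the Azure load balancer can take a minute or two. One way to wait for the EXTERNAL-IP column to change from <pending> is to watch the service:

kubectl get service hello-world-service --watch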

Fig. 8: Deployed services in namespace

Now, you can access hello-world using the external IP in your browser.
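
You can also verify the endpoint from the Cloud Shell first, for example:

curl http://<external-ip>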

Remember to clean up the resources you created in this tutorial to avoid incurring unnecessary Azure charges. To do so, run the command below to delete your resource group, including its resources.


az group delete --name <your-resource-group-name>

Alternatively, follow these steps to quickly delete resources directly from the Azure portal:

  • Navigate to the resource group containing your AKS cluster.
  • Select the resource group to view its details.
  • In the top toolbar, click on Delete resource group.
  • Enter the name of the resource group, then click on Delete to confirm and proceed with the deletion.

If you want to keep specific resources while removing others, you can delete individual resources within the resource group instead of deleting the entire group. Open the All resources view in the portal and select the specific resources you want to delete.
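
If you only want to remove the AKS cluster itself while keeping the rest of the resource group, one option is:

az aks delete --resource-group <your-resource-group-name> --name <your-aks-cluster-name>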

Handling challenges and troubleshooting network policy issues

One common challenge when working with network policies is ensuring that they’re applied correctly and aren’t causing unintended consequences.

To troubleshoot network policy issues, follow these common steps:

  • Sometimes, a simple indentation error in the manifest file causes network policy issues. Invalid YAML or schema errors typically surface when you run kubectl apply, so review the network policy manifest for syntax errors if the apply command fails.
  • Use the kubectl describe networkpolicies -n <name-space> command to view the details of your network policies, including the pod selectors, and ingress and egress rules.

Take a look at this example:

Fig. 9: Applied network policies

You can also use the kubectl logs <podname> -n <namespace> command to check the logs of your pods for any errors or warnings related to network connectivity.

Fig. 10: Logs for a specific pod
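
Another quick isolation step is to temporarily remove a suspect policy, retest connectivity, and then re-apply it once you've confirmed whether it's the cause:

# Remove the policy and retest pod-to-pod connectivity
kubectl delete networkpolicy <policy-name> -n <name-space>
# Restore the policy afterward
kubectl apply -f network-policy.yaml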

It’s important to thoroughly test and validate your network policies in different scenarios to ensure that they’re working as expected and provide the desired level of security and performance for your AKS cluster.

Conclusion

This article discussed the fundamentals of network policies in AKS, including how to create and apply them to control the traffic flow and enforce security measures in AKS clusters. To further enhance your AKS network management capabilities, explore Site24x7’s Azure Kubernetes Service Monitoring Integration.

With this integration, you can proactively monitor your clusters’ infrastructure health, configure thresholds for various metrics, monitor resource use, and set up critical issue alerts for early detection and remediation. To start, sign up for a 30-day free trial account!
