Azure Kubernetes Service (AKS) Network plugins

Kubernetes networking enables you to configure communication within your k8s network. It is based on a flat network structure, which eliminates the need to map ports between hosts and containers. Let's discuss the different network plugins available in AKS.

An AKS cluster can be deployed using one of the following network plugins:

  • Kubenet (Basic) networking
  • Azure Container Networking Interface (CNI) networking

Kubenet (basic) networking

Kubenet is a very simple network plugin. It does not implement more advanced features such as cross-node networking or network policy, and it is the default network plugin for AKS. Only the nodes receive an IP address in the virtual network subnet; pods cannot communicate directly with each other across the virtual network. Instead, user-defined routing (UDR) and IP forwarding provide connectivity between pods across nodes. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:

[Diagram: kubenet networking. AKS nodes receive IP addresses from the virtual network subnet; pods do not.]

The Azure portal does not provide an option to select a virtual network when using kubenet, but the cluster can be deployed into an existing subnet using Terraform or ARM templates, as in the sample below.

Terraform sample

# import existing subnet
data "azurerm_subnet" "subnet-aks" {
    name = "azsrini-AKS-SN"
    virtual_network_name = “azsrini-aks-vnet”
    resource_group_name = “azsrini-aks”
}

# create log analytics workspace
resource "azurerm_log_analytics_workspace" "azsrini-law" {
    name                = “azsrini-law”
    location            = “eastus”
    resource_group_name = “azsrini-aks”
    sku                 = “Standard”
}


# create cluster.
resource "azurerm_kubernetes_cluster" "azsrini-k8s" {
    name                = "azsrini-AKSCL"
    location            = "eastus"
    resource_group_name = "azsrini-aks"
    kubernetes_version  = "1.20.9"
    tags                = { environment = "Test" }
    sku_tier            = "Free"
    dns_prefix          = "azsrini-aks"
    private_cluster_enabled = true

    linux_profile {
        admin_username = "azsriniaksuser"

        ssh_key {
            key_data = "ssh-rsa AAAAB………………………………"
        }
    }

    identity {
         type = "SystemAssigned"
    }

    role_based_access_control {
        enabled = true
    } 

    default_node_pool {
        name                = "default"
        node_count          = 1
        vm_size             = "Standard_D4s_v3"
        os_disk_size_gb     = 128
        vnet_subnet_id      = data.azurerm_subnet.subnet-aks.id
        node_labels         = {}
        availability_zones  = ["1", "2", "3"]
        max_pods            = 150
        enable_auto_scaling = true
        max_count           = 5
        min_count           = 1
    }

    addon_profile {
        oms_agent {
        enabled                    = true
        log_analytics_workspace_id = azurerm_log_analytics_workspace.azsrini-law.id
        }
    }

    network_profile {
        load_balancer_sku = "Standard"
        network_plugin = "kubenet"
        outbound_type = "loadBalancer"
    }  
}
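
With the files in place, a standard terraform init, terraform plan, and terraform apply deploys the cluster into the existing subnet.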

Azure supports a maximum of 400 routes in a UDR, so a kubenet AKS cluster cannot be larger than 400 nodes.
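
With kubenet, those pod routes live in a route table attached to the cluster subnet. Here is a minimal Terraform sketch of bringing your own route table; the resource names are hypothetical, and the cluster identity also needs permission to write routes into the table:

# Hypothetical bring-your-own route table for kubenet pod routes.
resource "azurerm_route_table" "aks-rt" {
    name                = "azsrini-aks-rt"
    location            = "eastus"
    resource_group_name = "azsrini-aks"
}

# Associate it with the AKS subnet; AKS adds one route per node,
# which is where the 400-route (400-node) limit comes from.
resource "azurerm_subnet_route_table_association" "aks-rt-assoc" {
    subnet_id      = data.azurerm_subnet.subnet-aks.id
    route_table_id = azurerm_route_table.aks-rt.id
}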

Azure CNI (advanced) networking

When using Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and be unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports; the equivalent number of IP addresses per node is then reserved up front.
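
For comparison with the kubenet sample above, switching to Azure CNI is mostly a change to the network_profile block (the node pool's vnet_subnet_id stays as-is). A minimal sketch follows; the CIDR values are illustrative assumptions and must not overlap your virtual network or any network connected to it:

network_profile {
    network_plugin     = "azure"
    load_balancer_sku  = "Standard"
    outbound_type      = "loadBalancer"
    service_cidr       = "10.0.0.0/16"    # virtual IP range for Kubernetes services
    dns_service_ip     = "10.0.0.10"      # must fall within service_cidr
    docker_bridge_cidr = "172.17.0.1/16"  # must not overlap the vnet or service_cidr
}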

IP exhaustion and planning with Azure CNI

We faced an IP exhaustion issue in our non-prod environment, so plan carefully ahead of time for Azure CNI networking. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster.

IP addresses for the pods and the cluster's nodes are assigned from the specified subnet within the virtual network. Each node is configured with a primary IP address, and by default Azure CNI pre-configures 30 additional IP addresses that are assigned to pods scheduled on that node. When you scale out your cluster, each new node is configured with IP addresses from the subnet in the same way.
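
As a rough sizing check, the Azure CNI documentation suggests budgeting (nodes + 1) + ((nodes + 1) * max pods per node) IP addresses, where the extra node covers the surge instance created during an upgrade. A small illustrative calculation in Terraform (the node and pod counts are assumptions):

# Rough subnet sizing for Azure CNI; the numbers are illustrative.
locals {
    node_count   = 50
    max_pods     = 30   # Azure CNI default per node
    required_ips = (local.node_count + 1) + ((local.node_count + 1) * local.max_pods)
    # = 51 + 1530 = 1581 addresses, so a /21 subnet (2043 usable) fits
}

Azure also reserves five addresses in every subnet, so leave some headroom beyond the computed figure.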

Compare network models

Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:

  • kubenet
    • Conserves IP address space.
    • Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
    • You manually manage and maintain user-defined routes (UDRs).
    • Maximum of 400 nodes per cluster.
  • Azure CNI
    • Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
    • Requires more IP address space.

Conclusion

Both network models have advantages and disadvantages. Kubenet is very basic, whereas Azure CNI is advanced and forces you to plan networking ahead of time, which might save a lot of trouble later. If you do not have a shortage of IP addresses, I would recommend going with Azure CNI.

Helpful links:
https://docs.microsoft.com/en-us/azure/aks/configure-kubenet
https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni
https://mehighlow.medium.com/aks-kubenet-vs-azure-cni-363298dd53bf
