Helm charts for Kubernetes

Helm is a package manager for Kubernetes. Helm deploys charts, which are packages of pre-configured Kubernetes resources; think of Helm as apt/yum/Homebrew for Kubernetes.

Benefits of Helm
  • Find and use popular software packaged as Kubernetes charts.
  • Share applications as Kubernetes charts.
  • Create reproducible builds for Kubernetes applications.
  • Intelligently manage Kubernetes object definitions.
  • Manage releases of Helm packages.
Installing Helm

Windows:
choco install kubernetes-helm

macOS:
brew install helm

Debian/Ubuntu:

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
sudo apt-get install apt-transport-https --yes
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Please refer to the official Helm documentation for other installation methods.

Generate sample chart

The best way to create a Helm chart is by running the "helm create" command; run it in the directory of your choice.

       helm create azsrini   (azsrini is the name of my chart; use a name of your choice)

Helm will create a new directory "azsrini" with the file hierarchy shown below.

There are four main components in a Helm chart:

  • Chart.yaml
  • values.yaml
  • templates/ directory
  • charts/ directory for dependent charts

Chart.yaml contains the required information describing what the chart is about, including a name and version. More information can be included in Chart.yaml, but those are the minimum required fields.
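
For reference, a minimal Chart.yaml looks something like this (the values below mirror what "helm create" generates and are illustrative):

apiVersion: v2
name: azsrini
description: A Helm chart for Kubernetes
type: application
version: 0.1.0          # chart version
appVersion: "1.16.0"    # version of the application being packaged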


The values.yaml file is where you define the values to be injected into the templates. Values are represented as key-value pairs and can be nested. There can be multiple values files for different environments/namespaces, for example customvalues.yaml, and such a file can be passed during install: helm install -f customvalues.yaml ./azsrini
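
For example, a small values.yaml with nested values might look like this (the keys are illustrative):

replicaCount: 2
image:
  repository: nginx
  tag: "1.16.0"
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 80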


The templates/ directory is where Helm finds the YAML definitions for your Services, Deployments and other Kubernetes objects. Helm leverages Go templating when conditional logic needs to be included in the templates.
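
As a small sketch of Go templating inside a template file (it references the illustrative values.yaml keys above; the optional service.annotations key is an assumption for this example):

apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-service
  {{- if .Values.service.annotations }}
  annotations:
    {{- toYaml .Values.service.annotations | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}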


The charts/ directory is where dependencies or subcharts are placed.

Chart dependencies

In Helm, one chart can depend on other charts. These dependencies can be dynamically linked using the dependencies field in Chart.yaml, or brought into the charts/ directory and managed manually.
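
For example, a dynamically linked dependency is declared in Chart.yaml like this (the chart name and repository URL below are placeholders):

dependencies:
  - name: azure-vote
    version: "1.0.0"
    repository: "https://example.github.io/helm-charts"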

I have added a sample dependency from the Azure Helm samples; running "helm dependency update" will download the dependency into the charts/ folder, as shown below.

You could also add subcharts by going into the charts/ folder and running "helm create sub-chart"; this will create an entire chart hierarchy inside the charts/ folder.



Deploying Charts

The sample chart created above has two pods defined (including the dependency) and can be deployed to Kubernetes by running the helm install command. I'm installing the sample chart on an Azure Kubernetes cluster; this installs the templates along with the dependencies.

helm install {name of deployment} {chartlocation}
ex: helm install azsrini-helm ./azsrini

Pods and Services deployed using Helm

Debugging Helm Templates

Debugging templates can be tricky because the rendered templates are sent to the Kubernetes API server, which may reject the YAML files for reasons other than formatting. There are a few commands that can help with debugging; examples using the sample chart follow the list below.

  • helm lint verifies that the chart follows best practices.
  • helm install --dry-run --debug or helm template --debug renders the YAML templates without deploying them, so you can review what will be sent to the cluster.
  • helm get manifest shows the templates that are installed on the server.
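
For example, using the sample chart and release from this post:

helm lint ./azsrini
helm install azsrini-helm ./azsrini --dry-run --debug
helm template --debug ./azsrini
helm get manifest azsrini-helm
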
Upgrade Helm package

When you have a newer version of a chart, a Helm deployment can be upgraded by running the helm upgrade command. Helm upgrade takes the existing release and upgrades it with the information provided. I updated version to 0.1.1 and appVersion to 1.16.1 in Chart.yaml and upgraded my deployment; helm list will show the newer chart version and app version.
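
A quick sketch using the release created earlier:

helm upgrade azsrini-helm ./azsrini
helm list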

Rollback Helm deployment

A Helm deployment can easily be rolled back to a previous version by running helm rollback. You can use the helm history command to view all previous revisions and roll back to the desired one.
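
For example, to inspect the release history and roll back to revision 1:

helm history azsrini-helm
helm rollback azsrini-helm 1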

Helm package

We have been working with unpacked charts so far. If you want to make a chart public or deploy it to a repository, it needs to be packaged into a tar file; all publicly available charts are tar files. Charts can be packaged using the helm package command, which creates a tar file in the working directory. Helm names the package using the metadata in Chart.yaml: {name}-{version}.tgz. Helm packages can be installed using the same helm install command.
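
For example (the .tgz name follows the metadata in Chart.yaml, 0.1.1 after the upgrade above):

helm package ./azsrini
helm install azsrini-helm ./azsrini-0.1.1.tgz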

Helm Repository

A Helm repository is a place where Helm packages are stored and shared. It can be any HTTP server that can serve YAML and tar files; check out Artifact Hub for community-distributed charts. There are several options for hosting chart repositories, such as ChartMuseum (open source), GitHub, HashiCorp Vault, JFrog Artifactory, a GCS bucket, an S3 bucket or Azure storage.
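
As a sketch, adding a repository and installing a chart from it (the repository name and URL are placeholders):

helm repo add myrepo https://example.com/charts
helm repo update
helm install azsrini-helm myrepo/azsrini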

Conclusion

I haven't discussed injecting values, template functions, control flow or scopes in depth; I highly recommend referring to the official Helm documentation and the Go template functions at https://godoc.org/text/template.

Useful links

I found starter tutorials at https://jfrog.com/blog/10-helm-tutorials-to-start-your-kubernetes-journey/ very helpful.
Helm official documentation: https://helm.sh/
Helm commands: https://helm.sh/docs/helm/helm/


Azure Kubernetes Service (AKS) Network plugins

Kubernetes networking enables you to configure communication within your k8s network. It is based on a flat network structure, which eliminates the need to map ports between hosts and containers. Let's discuss the different network plugins available in AKS.

An AKS cluster can be deployed using one of the following network plugins:

  • Kubenet (Basic) networking
  • Azure Container Networking Interface (CNI) networking

Kubenet (basic) networking

Kubenet is a very simple network plugin. It does not implement more advanced features like cross-node networking or network policy, and the kubenet option is the default network plugin for AKS. Only the nodes receive an IP address in the virtual network subnet; pods can't communicate directly with each other across nodes, so User Defined Routing (UDR) and IP forwarding are used for connectivity between pods across nodes. The following diagram shows how the AKS nodes receive an IP address in the virtual network subnet, but not the pods:


The Azure portal does not provide an option to select a VNet when using kubenet, but the cluster can be deployed into an existing subnet using Terraform or ARM templates.

Terraform sample

# import existing subnet
data "azurerm_subnet" "subnet-aks" {
  name                 = "azsrini-AKS-SN"
  virtual_network_name = "azsrini-aks-vnet"
  resource_group_name  = "azsrini-aks"
}

# create log analytics workspace
resource "azurerm_log_analytics_workspace" "azsrini-law" {
  name                = "azsrini-law"
  location            = "eastus"
  resource_group_name = "azsrini-aks"
  sku                 = "PerGB2018" # legacy SKUs such as "Standard" are not available for new workspaces
}

# create cluster
resource "azurerm_kubernetes_cluster" "azsrini-k8s" {
  name                    = "azsrini-AKSCL"
  location                = "eastus"
  resource_group_name     = "azsrini-aks"
  kubernetes_version      = "1.20.9"
  tags                    = { environment = "Test" } # tags must be a map of strings
  sku_tier                = "Free"
  dns_prefix              = "azsrini-aks"
  private_cluster_enabled = true

  linux_profile {
    admin_username = "azsriniaksuser"

    ssh_key {
      key_data = "ssh-rsa AAAAB………………………………"
    }
  }

  identity {
    type = "SystemAssigned"
  }

  role_based_access_control {
    enabled = true
  }

  default_node_pool {
    name                = "default"
    node_count          = 1
    vm_size             = "Standard_D4s_v3"
    os_disk_size_gb     = 128
    vnet_subnet_id      = data.azurerm_subnet.subnet-aks.id
    node_labels         = {}
    availability_zones  = ["1", "2", "3"]
    max_pods            = 150
    enable_auto_scaling = true
    max_count           = 5
    min_count           = 1
  }

  addon_profile {
    oms_agent {
      enabled                    = true
      log_analytics_workspace_id = azurerm_log_analytics_workspace.azsrini-law.id
    }
  }

  network_profile {
    load_balancer_sku = "Standard"
    network_plugin    = "kubenet"
    outbound_type     = "loadBalancer"
  }
}

Azure supports a maximum of 400 routes in a UDR, so you can't have a kubenet AKS cluster larger than 400 nodes.

Azure CNI (advanced) networking

When using Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node is then reserved up front.

IP Exhaustion and planning with Azure CNI

We are facing an IP exhaustion issue in our non-prod environment, so plan carefully ahead of time for Azure CNI networking. The size of your virtual network and its subnet must accommodate the number of pods you plan to run and the number of nodes for the cluster.

IP addresses for the pods and the cluster’s nodes are assigned from the specified subnet within the virtual network. Each node is configured with a primary IP address. By default, 30 additional IP addresses are pre-configured by Azure CNI that are assigned to pods scheduled on the node. When you scale out your cluster, each node is similarly configured with IP addresses from the subnet.
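
As a rough sizing sketch, assuming the default of 30 pods per node:

50 nodes + (50 nodes × 30 pods) = 1,550 IP addresses

plus headroom for at least one extra node (and its 30 pod addresses) needed during upgrades or scale-out.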

Compare network models

Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:

  • kubenet
    • Conserves IP address space.
    • Uses Kubernetes internal or external load balancer to reach pods from outside of the cluster.
    • You manually manage and maintain user-defined routes (UDRs).
    • Maximum of 400 nodes per cluster.
  • Azure CNI
    • Pods get full virtual network connectivity and can be directly reached via their private IP address from connected networks.
    • Requires more IP address space.

Conclusion

Both network models have advantages and disadvantages. Kubenet is very basic, whereas CNI is advanced and forces you to plan networking ahead of time, which might save a lot of trouble later. If you do not have a shortage of IP addresses, I would recommend going with Azure CNI.

Helpful links:
https://docs.microsoft.com/en-us/azure/aks/configure-kubenet
https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni
https://mehighlow.medium.com/aks-kubenet-vs-azure-cni-363298dd53bf


Azure Kubernetes cluster across Availability Zones

Availability Zones are a high-availability offering that protects applications and data from datacenter failures. Availability Zones are unique physical locations within an Azure region; each zone is equipped with independent power, cooling, and networking, and there's a minimum of three separate zones in all enabled regions.

AKS cluster nodes can be deployed across multiple zones within an Azure region for high availability and business continuity.

Limitations

  • Availability zones can be defined only when creating the cluster or node pool.
  • Availability zone settings can't be updated after the cluster is created.
  • The node size (SKU) selected must be available across all of the selected availability zones.
  • The Azure Standard Load Balancer is required for clusters with availability zones enabled.
Create an AKS cluster across availability zones

When creating a cluster using the "az aks create" command, the --zones parameter defines which zones the agent nodes are deployed into; etcd and the cluster API components are spread across the available zones in the region during creation. Run the command below in the Azure CLI to create 3 nodes in 3 different zones in the eastus2 region, as my resource group (aks-az-rg) is in eastus2.

az aks create --resource-group aks-az-rg --name AKS-AZ-Cluster --generate-ssh-keys --vm-set-type VirtualMachineScaleSets --load-balancer-sku standard --node-count 3 --zones 1 2 3

Verify Node Distribution

Run "az aks get-credentials" to set up kubeconfig, then verify the node details using the kubectl describe command in Bash; you can also verify this on the Azure portal.

az aks get-credentials --resource-group aks-az-rg --name AKS-AZ-Cluster

kubectl describe nodes | grep -e "Name:" -e "failure-domain.beta.kubernetes.io/zone"

The three nodes are distributed across three zones: eastus2-1, eastus2-2 and eastus2-3. If you scale the node pool, the Azure platform automatically distributes the new nodes across zones.

az aks scale --resource-group aks-az-rg --name AKS-AZ-Cluster --node-count 5

Verify pod distribution

Let's deploy an image with three replicas and verify how the pods are distributed. Run the commands below in the Azure CLI to deploy NGINX with three replicas.

kubectl create deployment nginx --image=mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine

kubectl scale deployment nginx --replicas=3

By viewing the nodes where the pods are running, you can see the pods are spread across nodes corresponding to three different availability zones. Run the command below in Bash to verify the node where each pod is running.

kubectl describe pod | grep -e "^Name:" -e "^Node:"

The first pod is running on node 3, which is located in the availability zone eastus2-1. The second pod is running on node 1 which corresponds to eastus2-2, and the third one in node 2, in eastus2-3. Without any additional configuration, Kubernetes is spreading the pods correctly across all three availability zones.

Conclusion

If you are running applications with a higher SLA on AKS, take advantage of high availability with Azure Availability Zones as part of your comprehensive business continuity and disaster recovery strategy, with built-in security and a flexible, high-performance architecture.

Helpful Links: https://docs.microsoft.com/en-us/azure/aks/availability-zones


Connect to Azure SQL Managed instance By Using Azure AD Authentication

Azure Active Directory authentication is a mechanism for connecting to Microsoft Azure SQL Database and other resources by using identities in Azure Active Directory (Azure AD). SQL password authentication is not very secure; AD login is more secure and provides password rotation and MFA.

Azure Active Directory authentication uses contained database users to authenticate identities at the database level, i.e. the identity does not have access to the master database and access is assigned at the individual database level. An Azure AD identity can be either an individual user account or a group. In this post I will walk you through how to grant contained database access to an AD group.

Create Active directory Group in Azure AD

  1. Create an Active Directory group for DB access in Azure AD; multiple groups can be created to isolate database access or to separate read and write permissions.
  2. Add users to the AD group.

Create Contained database users

  1. Connect to the Azure database using SQL Server Management Studio. To create contained users/groups, you need to log in with AD credentials that are set up as admin on the database server.
  2. In my case, the admin is an Active Directory user who is also the SQL admin.


  3. Run the following command to create contained DB users:
CREATE USER [Azure_AD_principal_name] FROM EXTERNAL PROVIDER;

Example: CREATE USER [azsrini-DB-RW] FROM EXTERNAL PROVIDER;

Azure_AD_principal_name can be the user principal name of an Azure AD user or the display name for an Azure AD group

When you create a database user, that user receives the CONNECT permission and can connect to that database as a member of the PUBLIC role. Once you provision an Azure AD-based contained database user, you can grant the user additional permissions; in this example, azsrini-DB-RW is granted CONTROL permission.
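
For example, a sketch of granting broader rights to the group created above (adjust the permissions to what the group actually needs):

-- grant the AD group full control over this database
GRANT CONTROL TO [azsrini-DB-RW];

-- or, for typical read/write access, use the built-in database roles instead
ALTER ROLE db_datareader ADD MEMBER [azsrini-DB-RW];
ALTER ROLE db_datawriter ADD MEMBER [azsrini-DB-RW];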

Connect to Database

Active Directory Integrated authentication

To connect to a database using integrated authentication and an Azure AD identity, the Authentication keyword in the database connection string must be set to Active Directory Integrated, and the client application should be running on a domain-joined machine.

string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory Integrated;";
SqlConnection conn = new SqlConnection(ConnectionString);
conn.Open();

Active Directory principal name and password

To connect to a database using an AD name and password, the Authentication keyword must be set to Active Directory Password, and the connection string must contain User ID/UID and Password/PWD keywords and values.

string ConnectionString = @"Data Source=n9lxnyuzhv.database.windows.net; Authentication=Active Directory Password; UID=azsrini@azsrini.onmicrosoft.com; PWD=passwrod123";

Connect using SQL Management Studio

To connect from SQL Server Management Studio, go to Options, enter the database name, and then connect using the Active Directory username/password.

Useful Links:

https://github.com/toddkitta/azure-content/blob/master/articles/sql-database/sql-database-aad-authentication.md

https://docs.microsoft.com/en-us/azure/azure-sql/database/logins-create-manage

Conclusion:

AD authentication is a secure way to connect to a database. AD takes care of password rotation and MFA, and it also eliminates storing passwords in connecting client applications.

Thank you
Srinivasa Avanigadda


Azure SQL Server auto-failover groups

Auto-failover groups can be used for business continuity to switch database(s) to a secondary location automatically in case of a primary failure. A failover group allows you to manage the replication and failover of a group of databases on a server, or all databases in a managed instance, to another region.

Failover groups are built on top of geo-replication and provide automatic failover for your applications; this is achieved through listeners created by Azure.

In addition, auto-failover groups provide read-write and read-only listener endpoints that remain unchanged during failovers. Whether you use manual or automatic failover activation, failover switches all secondary databases in the group to primary. After the database failover is completed, the DNS record is automatically updated to redirect the endpoints to the new region.
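
As a sketch, for a failover group named azsrini-fog (a placeholder name), the listener endpoints look like this and stay the same after a failover:

Read-write listener: azsrini-fog.database.windows.net
Read-only listener:  azsrini-fog.secondary.database.windows.net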

Best practices for SQL Database

The auto-failover group must be configured on the primary server and will connect it to the secondary server in a different Azure region. The groups can include all or some databases in these servers. The following diagram illustrates a typical configuration of a geo-redundant cloud application using multiple databases and auto-failover group.

Let’s Create Failover Group

  1. Go to an existing managed SQL server instance, or create a new one, on Azure.
  2. Click Failover groups under "Data management".

  3. Click on 'Add group'.

  4. Enter a failover group name and select the secondary server, or create a new one if a secondary doesn't exist.
  5. For the Failover policy, select "Automatic"; this can be changed later if needed.
  6. The Read/Write grace period tells the system to wait for a specific period of time before initiating a failover; the default is 1 hour.
  7. Click "Create" to add the failover group.

Test failover (manual)

To test a manual failover, go to the failover group and click 'Failover'; this will initiate the switch of roles between the primary and secondary. There is also "Forced failover", which initiates the failover immediately without waiting for the grace period.

Useful Links:

https://docs.microsoft.com/en-us/azure/azure-sql/database/auto-failover-group-overview

Conclusion:

To achieve business continuity, adding database redundancy is only part of the solution; recovering an application end-to-end after a catastrophic failure requires recovery of all components that constitute the service.

Thank you
Srinivasa Avanigadda



Inject Azure Key vault secrets into Azure DevOps Pipeline.

Azure Key Vault helps to securely store and manage sensitive information such as keys, passwords, certificates, etc., which prevents exposing confidential information in source code. When working with Azure DevOps, you might need to use sensitive information like service principals or API keys; you can integrate a pipeline with Key Vault in a few steps and read secrets securely without configuring them in the build pipeline.

There are two ways to retrieve secrets from Azure Key Vault into pipelines:

  1. Pipeline task – secrets are available within the pipeline only.
  2. Variable groups – secrets are available across all the pipelines.

In this post, we will look into retrieving secrets using the pipeline task; we will look into variable groups in another post.

Let's get started.

I already have a key vault created and have added a couple of sample keys to use from within the pipeline; please refer to the Microsoft documentation for key vault creation.

Create Service Connection in Azure DevOps organization.

Service connections enable you to connect to external and remote services to execute tasks in a job, for example to a Microsoft Azure subscription, to a different build server or file server, to an online continuous integration environment, or to services you install on remote computers.

  1. Go to Project Settings in your Azure DevOps organization.
  2. Click Service connections and add a new service connection. Fill in the parameters for the service connection; the list of parameters differs for each type of service connection.

  3. Check the Grant access permission to all pipelines option to allow YAML pipelines to use this service connection.
  4. Choose Save to create the connection.

Add Key Vault task to pipeline
  1. Go to your project in Azure DevOps and create a new YAML pipeline, or select an existing one.
  2. Select Show assistant to expand the assistant panel. This panel provides a convenient and searchable list of pipeline tasks.

  3. Search for "vault" and select the Azure Key Vault task.
  4. Select and authorize the Azure subscription you used to create your Azure key vault earlier. Select the key vault and select Add to insert the task at the end of the pipeline. This task allows the pipeline to connect to your Azure Key Vault and retrieve secrets to use as pipeline variables.

  5. SecretsFilter '*' retrieves all secrets; you can also provide a comma-separated list to get specific secrets.
  6. Key Vault values can be referenced in the pipeline using the syntax $(secretname).
  7. I'm writing a secret to a text file and publishing it for testing; a sketch of the resulting YAML follows this list.
  8. Save the pipeline (don't run it yet).
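
A minimal sketch of what the resulting pipeline YAML might look like (the service connection and key vault names are placeholders; ClientId is the sample secret used later in this post):

steps:
- task: AzureKeyVault@2
  inputs:
    azureSubscription: 'my-azure-service-connection'   # service connection created above (placeholder)
    KeyVaultName: 'azsrini-keyvault'                   # placeholder key vault name
    SecretsFilter: '*'                                 # all secrets, or a comma-separated list

- script: echo $(ClientId) > $(Build.ArtifactStagingDirectory)/secret.txt
  displayName: 'Write secret to a text file (for testing only)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'
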
Set up Azure Key Vault access policies
  1. Go to the key vault you want to integrate with the pipeline.
  2. Under Settings, select Access policies.
  3. Select Add Access Policy to add a new policy.
  4. For Secret permissions, select Get and List.
  5. Select the option to select a principal and search for yours.
    A security principal is an object that represents a user, group, service, or application that's requesting access to Azure resources. Azure assigns a unique object ID to every security principal. The default naming convention is [Azure DevOps account name]-[Azure DevOps project name]-[subscription ID].

  6. Select Add to create the access policy.
  7. Select Save.
Run and test Secrets retrieval in Pipeline.
  1. Run the pipeline that was created earlier.

  2. Return to the pipeline summary and select the published artifact.
  3. Under Job, select the secret.txt file to view it.

  4. The text file contains our secret: ClientId.

Conclusion:
Secrets and passwords should never be exposed in source code or pipelines. Key Vault can be directly integrated with App Services, Function Apps and pipelines to retrieve secrets securely.


Festive Tech Calendar 2020: Azure Well Architected Framework

Festive Tech Calendar is a great community event, hosted by Gregor Suttie (Azure Greg) and Richard Hooper (Pixel Robots) every year. It's all about sharing knowledge and helping people learn new things. There were multiple sessions throughout the month of December 2020, and I'm very happy I was able to contribute as well.

This post is a little delayed. I was very happy my session about the Azure Well-Architected Framework was accepted and delighted to see myself among many MVPs; this was my first-ever talk at a global event and it was so exciting 🙂

You can watch my session here.

Check out all contributions at https://festivetechcalendar.com/ or Festive Tech YouTube Channel

Useful links
https://docs.microsoft.com/en-us/azure/architecture/framework/


Logic Apps SQL Connector to detect record insertion and trigger workflow

Azure Logic Apps offers hundreds of connectors that provide quick access from logic apps to events, data and actions across other apps, services, systems, protocols, and platforms.

A logic app workflow can be triggered when a record is added/modified, using the SQL connector (this is not an old-school SQL trigger). Let's see how this can be implemented; I'm inserting into another table in the same database, but you get the idea.

Let’s get Started...

1) Go to the Azure portal, create a new Azure Logic App and click Logic app designer.

2) In the search box, enter "SQL" as your filter and pick When an item is created (V2).

3) Enter the SQL connection details, then select the database and table to watch for new record insertions.

4) The logic app provides an option to select how frequently to check for changes; I chose the default.

5) Add the next step to the logic app flow, select SQL Server and select the Insert Row (V2) action.

6) In the Insert Row step, add the SQL connection, then enter the database and destination table where data will be inserted.

7) Click 'Add new parameter' and select the columns you want to insert into the destination table.

8) Go through each column and select the corresponding column values from dynamic content (dynamic content populates all the values from the trigger output, in our case the inserted row data).

9) Save the logic app workflow.

Our logic app is ready. When a new record is inserted into the Users table, the same record gets inserted into the Users2 table as well via the logic app workflow. Let's test the logic app trigger and action.

Insert a sample record into the Users table that the logic app is monitoring.
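
For example, a minimal test insert (the column names here are hypothetical; use the columns of your own Users table):

INSERT INTO dbo.Users (FirstName, LastName, Email)
VALUES ('Srini', 'Test', 'srini.test@example.com');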

Since my logic app is checking the Users table for changes every 3 minutes, the workflow gets triggered within 3 minutes.

The record is inserted into the Users2 table.

Conclusion:
Logic Apps provides a number of connectors out of the box for different types of triggers and actions; check the Microsoft website for more details about connectors.


Notify Key Vault Status Changes using event grid subscription and logic apps

At one of my previous companies (pre-Azure days), I wrote a utility tool that scanned through all the certificates in the cert store and sent out an email notification to the team with the list of certs expiring within 30 days. It was an extremely useful tool back in the day; using Azure Key Vault events, the same functionality can be achieved quite easily.

Azure Key Vault integration with Event Grid allows users to be notified when the status of an object stored in the key vault has changed. Notifications are supported for all three object types in Key Vault (keys, secrets and certificates). Events are pushed through Azure Event Grid to event handlers such as Azure Functions, Azure Logic Apps, or webhooks. Key Vault events contain all the information needed to respond to changes in data.

Types of events supported

  1. Certificate New Version Created
  2. Certificate Near Expiry
  3. Certificate Expired
  4. Key New Version Created
  5. Key Near Expiry
  6. Key Expired
  7. Secret New Version Created
  8. Secret Near Expiry
  9. Secret Expired
  10. Vault Access Policy Changed

Let's integrate Logic Apps with Event Grid to receive a notification when a new secret version is created, using the Secret New Version Created event.

  1. Go to the key vault, select Events, then Get Started, and click Logic Apps.

  2. On the Logic Apps Designer, validate the connection and click Continue.

  3. On the When a resource event occurs screen, do the following:
    • Leave Subscription and Resource Name as default.
    • Select Microsoft.KeyVault.vaults for the Resource Type.
    • There are multiple events available; select Microsoft.KeyVault.SecretNewVersionCreated for Event Type Item – 1. Then select + New Step, which will open a window to Choose an action.

  4. Search for Email. Based on your email provider, find and select the matching connector. I'm using Office 365 Outlook; Gmail is not supported with the Event Grid binding.
  5. Select the Send an email (V2) action.
  6. Build your email template:
    • To: Enter the email address to receive the notification emails.
    • Subject and Body: Write the text for your email. Select JSON properties from the selector tool to include dynamic content based on event data. You can retrieve the data of the event using @{triggerBody()?['Data']}.
  7. Click Save As, enter a logic app name and click Create.

Let's test.

  1. Go to your key vault and select Events > Event Subscriptions; you will see a logic app subscription.
  2. Go ahead and add a new version to an existing secret, or add a new secret.
  3. Once the secret is created, the event gets triggered and an email is sent to the configured address.

Conclusion
Monitoring Key Vault using events is very useful: key rotations can be automated using an event handler, and certificate expiration notifications can be sent. I will explain auto-rotation of keys using an event subscription in upcoming posts.

Useful links
https://docs.microsoft.com/en-us/azure/key-vault/general/event-grid-overview


Prevent Accidental Deletion of Resources in Azure with Locks

Role-based access control (RBAC) provides granular control over access to resources, but it cannot prevent accidental deletions or modifications. Organizations may need to lock a subscription, resource group, or resource to prevent accidental deletions or modifications to critical resources. I work in a very large team and have found locks very helpful for preventing unwanted modifications to NSGs and route tables that are shared across resources within the same subnet.

Let’s get started

There are two types of locks in Azure:

  • CanNotDelete (Delete): users can read and modify a resource, but can't delete it.
  • ReadOnly: users can read a resource, but can't delete or update it.

Locks can be applied at the subscription level, the resource group level or directly to a resource. Locks at a parent scope are inherited by all children, i.e. if a lock is applied at the resource group level, all resources within the resource group inherit the lock, and it also applies to any new resources created later in the resource group.
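
As a minimal Azure CLI sketch (the resource group and lock names are placeholders):

# lock an entire resource group so nothing inside it can be deleted
az lock create --name no-delete --lock-type CanNotDelete --resource-group azsrini-rg

# remove the lock when it is no longer needed
az lock delete --name no-delete --resource-group azsrini-rg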

Permission Required

Microsoft.Authorization/* or Microsoft.Authorization/locks/* permissions are required to create or delete locks.

Let's add a lock to a resource.

  1. In the resource panel under Settings, click Locks.
  2. In the Locks pane, click + Add.

  3. Enter a lock name, and then select Read-only or Delete from the Lock type menu.
  4. Enter a description in the Notes field (optional).

  5. Click OK.

Since a Delete lock is added on my VM, Azure will prevent it from being deleted until the lock is removed.

Conclusion
Locks are very useful when you are working in a large team or to prevent surprises in a prod environment. Locks can also be set up using ARM templates, PowerShell or the Azure CLI (as sketched above); refer to the URL below for more information.

Useful links
https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/lock-resources
