Control Access for APIs with Azure API Management using Products, Subscriptions and Policies

We learned about creating and importing APIs into Azure API Management in my last post; let's continue our journey with APIM.

What is a Product in APIM?

An APIM Product contains one or more APIs and can apply customizations such as rate limits and quotas on API access. Users subscribe to a Product via the developer portal; once approved, they can access all the APIs within that Product using a subscription key. Subscription approval is configured at the Product level and can either require administrator approval or be auto-approved. Products are a way of controlling access to APIs. For example, when you have enterprise-level APIs, access can be limited by team, or by internal vs. external users.

Let’s create a product

  1. Navigate to API Management Instance.
  2. In the left navigation, select products -> + Add
  3. In the Add Product window, enter values.
    State – Select Published if you want to publish the product; by default, products are not published.
    Requires subscription – Select if a user is required to subscribe to use the product.
    Requires approval – Select if you want an administrator to review and accept or reject subscription attempts to this product. If not selected, subscription attempts are auto-approved.

  4. Select the APIs you want to add to this product, or add them later.

Add Subscription

A subscription to a Product can be requested from the developer portal, or an APIM admin can create a subscription for users.

  1. In the APIM left navigation, select Subscriptions -> + Add Subscription.
  2. Select Product for the scope.
  3. Select the Product name that was created above.

Now that we added a subscription to the APIM Product, users can access its APIs using the subscription key.
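For example, a call through the gateway with the key passed in the Ocp-Apim-Subscription-Key header looks roughly like this (the gateway host, API path, and key below are placeholders, not values from a real instance):

```shell
curl "https://my-apim-instance.azure-api.net/my-api/getUsers" \
  -H "Ocp-Apim-Subscription-Key: <subscription-key>"
```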

Add policies

Policies are a collection of statements that are executed sequentially on the request or response of an API. Policies can be set up at the API level or the product level.

Let's set up a sample ip-filter policy to allow access only from a specific IP address.
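A minimal inbound ip-filter policy looks like the following (the address shown is just a placeholder):

```xml
<inbound>
    <base />
    <ip-filter action="allow">
        <address>13.66.201.169</address>
    </ip-filter>
</inbound>
```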

Obviously my local IP is not the allowed address 🙂, so any calls from my machine should fail. Let's verify the policy is working as expected.

BOOM!!! it worked…

Products and subscriptions are one way of controlling access; you can also use JWT token validation or certificate authentication to restrict access to APIs.


Intro to Azure API Management Service (APIM)

Azure API Management is a fully managed service to publish, secure, transform, maintain, and monitor APIs. API Management handles all the tasks involved in mediating API calls, including request authentication and authorization, rate limit and quota enforcement, request and response transformation, mocking, logging and tracing, and API version management.

Azure API Management has three main components.

  1. The API gateway is the endpoint that:
    • Accepts API calls and routes them to your back ends.
    • Verifies API keys, JWT tokens, certificates, and other credentials.
    • Enforces usage quotas and rate limits.
    • Transforms your API on the fly without code modifications.
    • Caches backend responses where set up.
    • Logs call metadata for analytics purposes.
  2. The Azure portal is the administrative interface to
    • Define or import API schema.
    • Package APIs into products.
    • Set up policies like quotas or transformations on the APIs.
    • Get insights from analytics.
    • Manage users.
  3. The Developer portal serves as the main web presence for developers or end-users, where they can:
    • Read API documentation.
    • Try out an API via the interactive console.
    • Create an account and subscribe to get API keys.
    • Access analytics on their own usage

In my organization, we use the Developer portal instead of Swagger for API definitions, and it works great for us.

Let's create an API Management instance using the Azure portal

  1. Log in to Azure, go to Create a Resource, and select Integration -> API Management.
  2. This opens the API Management creation blade. Fill out all the information and pick an appropriate pricing tier.
  3. Monitoring tab – select Application Insights; I would highly recommend turning this feature on.
  4. Scale tab – the Developer and Consumption tiers don't offer scaling.
  5. Managed identity – assign an identity for APIM to access other resources.
  6. Virtual network – select External or Internal to deploy APIM inside a virtual network. APIM is accessed over the internet with the External type and is accessible only internally with the Internal type; the default is None.
  7. Protocol settings – APIM supports multiple versions of the TLS protocol for both the client and the backend.
  8. Enter tags, then review and create. It takes some time for APIM to deploy, be patient 🙂

Import an Azure Function App as a new API

  1. Navigate to the APIM service created above in the Azure portal and select APIs from the menu.
  2. In the Add a new API list, select Function App.

  3. Click Browse to select Functions for import.
  4. Click on the Function App section to choose from the list of available Function Apps.
  5. Find the Function App to import Functions from, click on it and press Select.

  6. Select the Functions you would like to import and click Select.

 Note – You can only import Functions that are based on an HTTP trigger and have the authorization level set to Anonymous or Function.

Test the new API in the Azure portal

You can call API operations directly from the Azure portal, which provides a convenient way to view and test the operations.

  1. In the left navigation of the APIM instance, select APIs -> Apimanagement-fa.
  2. Select the Test tab, and then select getUsers. The page shows Query parameters and Headers, if any. The Ocp-Apim-Subscription-Key is filled in automatically for the subscription key associated with this API.
  3. Select Send.

Useful links

APIM is a great and very powerful tool for creating consistent and modern API gateways for existing back-end services. We will learn more about APIM in future posts.

Thank you
Srinivasa Avanigadda


Dependency injection with Azure functions

Dependency injection (DI) is a software design pattern and a technique for achieving Inversion of Control (IoC) between classes and their dependencies; it allows you to develop loosely coupled code.

Azure Functions support for dependency injection (DI) started in version 2.x. It is built on the .NET Core dependency injection features, so if you have used those, this should mostly look familiar to you.

Getting Started

Before you can use dependency injection, you must install the following NuGet packages in your Azure Functions project:

  • Microsoft.Azure.Functions.Extensions
  • Microsoft.NET.Sdk.Functions package version 1.0.28 or later

Create a class to register services. I called it Startup, but you can name it anything. Create a method inside the class to configure and add components to an IFunctionsHostBuilder instance. The Azure Functions host creates an instance of IFunctionsHostBuilder and passes it directly into your method.

To register the method, add the FunctionsStartup assembly attribute that specifies the type name used during startup.
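The original post showed this as a screenshot; a minimal sketch of such a class looks like the following (IMyService/MyService are placeholder names for whatever you want to register):

```csharp
using Microsoft.Azure.Functions.Extensions.DependencyInjection;
using Microsoft.Extensions.DependencyInjection;

// Tells the Functions host which class to run at startup
[assembly: FunctionsStartup(typeof(MyFunctionApp.Startup))]

namespace MyFunctionApp
{
    public class Startup : FunctionsStartup
    {
        public override void Configure(IFunctionsHostBuilder builder)
        {
            // Register services with the DI container
            // (IMyService/MyService are hypothetical examples)
            builder.Services.AddSingleton<IMyService, MyService>();
        }
    }
}
```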

Using Injected Dependencies

Constructor injection is used to make your dependencies available in a function. The use of constructor injection requires that you do not use static classes for injected services or for your function classes.
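For example, a function class can receive a registered service through its constructor (IMyService and its GetData method are hypothetical placeholders):

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public class GetUsersFunction
{
    private readonly IMyService _service;

    // The Functions host resolves IMyService from the DI container
    public GetUsersFunction(IMyService service)
    {
        _service = service;
    }

    [FunctionName("GetUsers")]
    public IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        return new OkObjectResult(_service.GetData());
    }
}
```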

Service lifetimes

Azure Functions apps provide the same service lifetimes as ASP.NET Dependency Injection.

  • Transient: Transient services are created upon each request of the service.
  • Scoped: The scoped service lifetime matches a function execution lifetime. Scoped services are created once per execution. Later requests for that service during the execution reuse the existing service instance.
  • Singleton: The singleton service lifetime matches the host lifetime and is reused across function executions on that instance. Singleton lifetime services are recommended for connections and clients, for example DocumentClient or HttpClient instances.
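Registration in the Startup class maps directly onto these lifetimes; for instance (the service names other than HttpClient are placeholders):

```csharp
builder.Services.AddTransient<IMyTransientService, MyTransientService>();
builder.Services.AddScoped<IMyScopedService, MyScopedService>();
builder.Services.AddSingleton<HttpClient>();
```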

Useful links

DI is great for writing loosely coupled code, and it makes it easier to implement unit tests.

Thank you
Srinivasa Avanigadda


Using Bicep to create ARM Templates

I have worked with both ARM templates and Terraform for deploying Azure resources. ARM is Microsoft's default infrastructure-as-code language and is extremely powerful, whereas Terraform uses the HashiCorp Configuration Language (HCL) to build and configure infrastructure; each section of a configuration file is human-readable and describes the desired resources to be implemented.

On September 8th (my birthday 😊), Bicep was announced through a tweet by Mark Russinovich, along with an invitation for open-source contributions. It appears Bicep is going to simplify the usage of ARM templates; resource declarations can be done in a human-readable format. I want to build actual biceps, but since I'm not yet ready to go to the gym due to the pandemic, I decided to stick with Bicep files for now 😊

Note: Bicep is not production ready; more breaking changes will be announced by the end of the year (2020). If you are interested in contributing to the project, please visit the GitHub repo.

What is Bicep (as mentioned on Github repo)?

Bicep is a Domain Specific Language (DSL) for deploying Azure resources declaratively. It aims to drastically simplify the authoring experience with a cleaner syntax and better support for modularity and code re-use. Bicep is a transparent abstraction over ARM and ARM templates, which means anything that can be done in an ARM template can be done in Bicep (outside of temporary known limitations). All resource types, apiVersions, and properties that are valid in an ARM template are equally valid in Bicep on day one.

Bicep compiles down to standard ARM template JSON files, which means the ARM JSON is effectively being treated as an Intermediate Language (IL). Bicep is a source-to-source compiler: source code written in Bicep is compiled to the equivalent ARM code (template), similar to how Babel converts newer JavaScript (ES6+) down to ES5.

Getting started.

We need two components to build and run Bicep files:

  • Bicep CLI (required)
  • Bicep VS Code Extension



# --- Windows (PowerShell) ---
# Create the install folder
$installPath = "$env:USERPROFILE\.bicep"
$installDir = New-Item -ItemType Directory -Path $installPath -Force
$installDir.Attributes += 'Hidden'
# Fetch the latest Bicep CLI binary
(New-Object Net.WebClient).DownloadFile("", "$installPath\bicep.exe")
# Add bicep to your PATH
$currentPath = (Get-Item -path "HKCU:\Environment" ).GetValue('Path', '', 'DoNotExpandEnvironmentNames')
if (-not $currentPath.Contains("%USERPROFILE%\.bicep")) { setx PATH ($currentPath + ";%USERPROFILE%\.bicep") }
if (-not $env:path.Contains($installPath)) { $env:path += ";$installPath" }
# Verify you can now access the 'bicep' command.
bicep --help
# Done!


# --- Linux ---
# Fetch the latest Bicep CLI binary
curl -Lo bicep
# Mark it as executable
chmod +x ./bicep
# Add bicep to your PATH (requires admin)
sudo mv ./bicep /usr/local/bin/bicep
# Verify you can now access the 'bicep' command
bicep --help
# Done!


# --- macOS ---
# Fetch the latest Bicep CLI binary
curl -Lo bicep
# Mark it as executable
chmod +x ./bicep
# Add Gatekeeper exception (requires admin)
sudo spctl --add ./bicep
# Add bicep to your PATH (requires admin)
sudo mv ./bicep /usr/local/bin/bicep
# Verify you can now access the 'bicep' command
bicep --help

Install the Bicep VS Code extension

  • Download the latest version of the extension
  • Open VSCode, and in the Extensions tab, select the options (…) menu in the top right corner and select ‘Install from VSIX’. Provide the path to the VSIX file you downloaded.

Let’s build some Biceps (files)

Create an empty file main.bicep in VS Code and compile it by running bicep build main.bicep.
And then the magic happens: a skeleton ARM JSON template file gets generated. The Bicep CLI converts Bicep code into ARM code.

  "$schema": "",
  "contentVersion": "",
  "parameters": {},
  "functions": [],
  "variables": {},
  "resources": [],
  "outputs": {}

There are four main components in a resource declaration.

  1. resource – the keyword that starts the declaration
  2. Name – identifier to reference the resource within the Bicep file
  3. Type – the type of the resource
  4. Properties – the resource properties

resource bicepstg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: 'azsrini-stg'
    location: 'eastus'
    kind: 'Storage'
    sku: {
        name: 'Standard_LRS'
    }
}
Now compile the Bicep file using bicep build main.bicep; the ARM code for the storage resource is generated in the JSON file. At this point, I can deploy it like any other ARM template using the standard command-line tools (az deployment or New-AzResourceGroupDeployment).

Adding parameters.

param location string = 'eastus'

resource stg 'Microsoft.Storage/storageAccounts@2019-06-01' = {
    name: 'azsrini-stg'
    location: location
    kind: 'Storage'
    sku: {
        name: 'Standard_LRS'
    }
}

Emitting outputs to be passed to a script or another template.
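The original post showed this as a screenshot; a simple sketch of an output declaration, building on the stg storage account above, looks like:

```bicep
// Expose the storage account's resource ID to callers of this template
output storageAccountId string = stg.id
```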

Deploying Bicep File.

Bicep files cannot yet be deployed directly via the Azure CLI or the PowerShell Az module. You must first compile the Bicep file with bicep build, then deploy the generated ARM JSON via the deployment commands (az deployment or New-AzResourceGroupDeployment).

Azure PowerShell:

bicep build ./main.bicep # generates main.json
New-AzResourceGroupDeployment -TemplateFile ./main.json -ResourceGroupName azsrini-rg

Deploy with parameters

Our Bicep file has one parameter (location) that we can override during the deployment.

New-AzResourceGroupDeployment -TemplateFile ./main.json -ResourceGroupName azsrini-rg -location westus

ARM templates also support passing a Parameter file.

New-AzResourceGroupDeployment -TemplateFile ./main.json -ResourceGroupName azsrini-rg -TemplateParameterFile ./parameters.main.json


Bicep is not production ready and is actively being developed; more features will be coming soon. Please follow the Bicep GitHub repository for updates, releases, and more features.

Microsoft Ignite is next week, Sep 22 – 24. This year (2020) it's completely virtual and free. Attend to learn about product announcements and roadmaps, and for an opportunity to meet some legends in the industry.

Useful links :

Thank you
Srinivasa Avanigadda


Deploying and Connecting to Ubuntu VM using Azure Portal.

Deploying and connecting to a Linux machine has never been easier on the Azure portal. Azure automatically generates an SSH key and saves it for future use, and lets you download the private key before deploying the VM. Let's get started.

Create virtual machine

  • Login to Azure portal.
  • Go to Home, Create a resource, and select Ubuntu Server (or type Virtual machine in the search box) and click Add.
  • In the Basics tab, under Project details, make sure the correct subscription is selected and then choose to Create new resource group or select existing.
  • Fill in all the basic details
  • Let's use SSH for authentication; you can use an existing public key for SSH, or a new key can be created by the Azure portal.
  • Under Inbound port rules > Public inbound ports, choose Allow selected ports and then select SSH (22) and HTTP (80) from the drop-down.
  • Leave the remaining defaults and then select the Review + create button at the bottom of the page.
  • On the Create a virtual machine page, you can see the details about the VM you are about to create. When you are ready, select Create.
  • When the Generate new key pair window opens, select Download private key and create resource. Your key file will be downloaded as azsrini-vm_key.pem. Make sure you know where the .pem file was downloaded; you will need the path to it in the next step.

Connect to Virtual Machine

To connect to the VM, you can use SSH, Bastion, or RDP.

  1. Go to the newly created virtual machine.
  2. Select SSH and provide the private key path; Azure automatically generates the connection command.
  3. Run the command in CMD.
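The generated command is a standard ssh invocation. Assuming the admin user name is azureuser (an assumption; use whatever you entered in the Basics tab) and a placeholder public IP, it looks roughly like:

```shell
# Restrict permissions on the downloaded private key first
chmod 400 azsrini-vm_key.pem
# Connect using the private key (the IP address is a placeholder)
ssh -i azsrini-vm_key.pem azureuser@20.42.0.1
```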

Boooooom Connected….


In this post we learned how to deploy and connect to a Linux VM. Hope you enjoyed it. Stay safe.

Useful links :

Thank you
Srinivasa Avanigadda
Twitter : @azsrini


GitHub Actions

GitHub Actions has gained a lot of traction recently and is well poised to change software delivery. It was clear at the recent Microsoft Build event that GitHub is Microsoft's first priority; all new features will be coming to GitHub first. Azure DevOps is not going anywhere, but I think GitHub should be the first choice if you are starting a new project.

GitHub Actions lets you build, test, and deploy code directly within the repository. I personally like this way better than using external CI/CD tools; GitHub Actions should be the first choice if you are using GitHub. As of writing this blog (June 2020), Actions are not yet available in GitHub Enterprise, but they are expected by Summer 2020.

Workflows are custom automated processes that you can set up in your repository to build, test, package, release, or deploy any code project on GitHub. A workflow is made up of a collection of Actions, such as checkout. Workflows are configured using YAML syntax and should be stored in the .github/workflows directory in the root of your repository.

Getting started..

Every GitHub repository has an Actions tab.

The GitHub Actions page has recommended Actions based on the project type. Mine is a .NET Core application, so my recommendation was the .NET Core workflow; that is slick. Let's go ahead with the .NET Core workflow by clicking “Set up this workflow”, and a YAML file gets added to the workflow directory.

Workflow walkthrough.

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

The on push/pull_request trigger runs the workflow for changes committed to master or pull requests merged into master. A workflow can also be triggered on a schedule:

on:
  schedule:
    - cron: '0 * * * *'


You can run Actions on GitHub-hosted runners or self-hosted agents.

runs-on: ubuntu-latest

Checkout:

The checkout action is a standard action that must be included in the workflow before the other actions; it checks the code out onto the runner.

    - uses: actions/checkout@v2

I kept the workflow file simple, but in a real-world scenario there will be more actions within the workflow file: build, test, publish, etc.
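A slightly fuller sketch of a .NET Core workflow, mirroring the restore/build/test steps of the template GitHub generates (the .NET version pinned here is an assumption), might look like:

```yaml
name: .NET Core

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Setup .NET Core
        uses: actions/setup-dotnet@v1
        with:
          dotnet-version: 3.1.301
      - name: Install dependencies
        run: dotnet restore
      - name: Build
        run: dotnet build --configuration Release --no-restore
      - name: Test
        run: dotnet test --no-restore --verbosity normal
```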

Upon committing the file, the workflow gets triggered.


I strongly believe GitHub Actions are the future. Thank you for reading. Stay well.

Useful links :

Thank you

Srinivasa Avanigadda

Twitter : @azsrini


Access Key Vault Secrets from Azure functions Using Managed Identity.

Azure Key Vault

Azure Key Vault is a tool for securely storing and accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, or certificates. A vault is a logical group of secrets.

I'm not going into Key Vault details in this post; please see the official documentation for more details about Key Vault.

Initially I was using the Azure.Security.KeyVault.Secrets client library to retrieve secrets. This does the job well, but you have to authenticate and make a connection to retrieve secrets. After some research I found that there is a much more elegant way, using a managed identity.

Let's get started.

I have created a function app and a key vault in my Azure subscription, and also added two secrets: UserName and Password.

Managed identity:

A system-assigned managed identity should be created for the function app to connect to Key Vault. A system-assigned managed identity enables Azure resources to authenticate to cloud services without storing credentials in code.

Key Vault access policy

An access policy needs to be added to the Key Vault so the function app can read secrets. Select Add Access Policy, select the permissions, then under Select principal search for the function app's name and add the principal.

Now let's add the Key Vault references. Go to the Function App Configuration and create new application settings for the UserName and Password secrets.

Reference Syntax

A Key Vault reference is of the form @Microsoft.KeyVault({referenceString}), where {referenceString} is replaced by one of the following options:

  • SecretUri=secretUri – The SecretUri should be the full data-plane URI of a secret in Key Vault, including a version.
  • VaultName=vaultName;SecretName=secretName;SecretVersion=secretVersion – The VaultName should be the name of your Key Vault resource, the SecretName should be the name of the target secret, and the SecretVersion should be the version of the secret to use.

(Copied from the Azure documentation.)
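For example, the UserName application setting in the function app would hold a reference like the following (the vault name and secret version here are placeholders):

```
UserName = @Microsoft.KeyVault(SecretUri=https://azsrini-kv.vault.azure.net/secrets/UserName/<secret-version>)
```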

When the Azure Function to Key Vault connection is made successfully, a green check mark is displayed.

Now that the secret references are added to the function app settings, the values can be retrieved easily using GetEnvironmentVariable within the Azure function. I created a sample HTTP trigger to read the config values and return the UserName and Password (strong password :-)) from Key Vault.
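A sketch of such a trigger (the function name is a placeholder; the setting names follow this post, and error handling is omitted):

```csharp
using System;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GetSecrets
{
    [FunctionName("GetSecrets")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequest req)
    {
        // App settings (including Key Vault references) surface as environment variables
        var userName = Environment.GetEnvironmentVariable("UserName");
        var password = Environment.GetEnvironmentVariable("Password");

        return new OkObjectResult($"UserName: {userName}, Password: {password}");
    }
}
```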


In this post we learned how to reference Key Vault values from function apps without using the Key Vault SDK. Hope you enjoyed it. Stay safe.

Useful links :

Thank you
Srinivasa Avanigadda
Twitter : @azsrini


Azure Functions and B2C Authentication

I was working on a single-page application built in Angular with Azure B2C for user management; B2C is pretty simple to get up and running. My back-end APIs are built using Azure Functions, and I wanted to authenticate the function calls using the same user JWT token.

Azure Functions supports different identity providers like Azure Active Directory, Facebook, Twitter, and Google. This post is focused on Azure B2C, but I think it would be the same for all identity providers. Azure Functions authentication is handled by “Easy Auth”, an App Service feature that sits on top of function apps; any incoming request should be authenticated.

Authentication can be turned on and off easily; if authentication is turned on, every incoming request must be authorized.

Getting Started

Step 1

Create an Azure B2C tenant; this can be done through the portal. Make sure to associate it with a subscription.

Instructions for creating in Azure B2C

Step 2:

B2C provides multiple user flows, also called user journeys. We will create a Sign up/Sign in user flow for logging in to the SPA.

Create Signup/Sign in policy:

The Azure B2C “Sign up and Sign in” flow is used to log users in or let them self-register.

Step 3 – Create an AD B2C Application

The application being protected needs to be registered in Azure AD B2C. Give it a name, select web app, and set a reply URL for testing; usually the reply URL is the URL of the web application being authenticated.

Step 4 – Enable authentication

Authentication (Easy Auth) can be enabled by going into the Azure Function settings, selecting Authentication/Authorization, and turning it on.
Next, we need to set up the Azure Active Directory authentication provider, for which we need to select “advanced” mode. There are two pieces of information that we need to provide. First is the client ID, which is the application ID of the application we created earlier in AD B2C. The second (issuer URL) is the URL of our sign up or sign in policy from AD B2C. This can be found by looking at the properties of the sign up or sign in policy in AD B2C.

Step 5:
Set up the SPA application to use Azure B2C for user management, capture the JWT token from login, and pass it in the Authorization header when calling the function app. MSAL makes Azure B2C authentication much easier.
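As an illustration (not from the original post), once MSAL has acquired a token, attaching it to a function call looks roughly like this (the function URL is a placeholder):

```typescript
// accessToken is assumed to have been acquired via MSAL (e.g. acquireTokenSilent)
const response = await fetch(
  'https://my-function-app.azurewebsites.net/api/getUsers',
  { headers: { Authorization: `Bearer ${accessToken}` } }
);
```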

Excellent Article on setting up Angular with B2C By Sam Bowen-Hughes


In this post we learned how to authenticate a function app using a B2C user token. Hope you enjoyed it. Stay safe.

Useful links :

Thank you
Srinivasa Avanigadda
Twitter : @azsrini
