
6 posts tagged with "azure"


Running a devcontainer in an Azure App Service

· 4 min read

I love Visual Studio Code devcontainers. I was curious if you could use an App Service with a custom container as a remote devcontainer.

What follows are instructions on how to get a remote devcontainer running in an App Service over SSH.


While this does appear to work, it seems a little fragile. The connection times out sporadically and I haven't used it to do development in earnest yet, so caveat emptor.

Creating an app-service-compatible devcontainer image

To connect to your app service devcontainer over SSH, you'll need to configure its sshd as documented in the Configure a custom container for Azure App Service guide from Microsoft.

```
Port 2222
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
```

In addition to the devcontainer setup, you'll need to add some additional configuration to your image for it to work in an app service. Specifically, you'll need to:

  • Add your sshd_config
  • Set the root password to Docker! (yes, this feels wrong, but the container is inaccessible unless you use an Azure AD authenticated tunnel)
  • Add a webserver for the app service to host, in this case I'm using nginx
```dockerfile
# use the default microsoft dotnet 5 devcontainer
FROM

# add required packages including openssh-server and nginx
RUN apt-get update \
    && apt-get -y install --no-install-recommends apt-transport-https curl ca-certificates lsb-release gnupg2 openssh-server nginx

# set the root password so you can log in over the tunnel
RUN echo "root:Docker!" | chpasswd

# Copy the sshd_config file to the /etc/ssh/ directory
COPY sshd_config /etc/ssh/

# Open port 80 for nginx and port 2222 for SSH access
EXPOSE 80 2222

# Start up nginx and ssh and then sleep to keep the container alive
CMD /usr/sbin/service nginx start && /usr/sbin/service ssh start && /usr/bin/sleep infinity
```

Connecting to your devcontainer

First, you'll need to create a tunnel to your app service devcontainer using the following command. Include the -p option to select a fixed port as this will make it easier to connect in future.

az webapp create-remote-connection --resource-group RESOURCE_GROUP_NAME -n APP_SERVICE_NAME -p 61000

Visual Studio Code requires key-based authentication. Copy your SSH public key to your devcontainer using the following:

```shell
# if you're on windows, use git bash for this
ssh-copy-id -i ~/.ssh/ -p 61000 root@localhost
```

To be able to pull from your git repository over SSH, you'll want to use an ssh-agent and ssh-agent forwarding. On Windows, you can start your ssh-agent with the following commands in a PowerShell session running as Administrator.
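Assuming the built-in Windows OpenSSH client (a sketch; check the service name on your machine):

```powershell
# allow the Windows ssh-agent service to be started, then start it
Get-Service ssh-agent | Set-Service -StartupType Manual
Start-Service ssh-agent

# load your key into the agent so it can be forwarded to the devcontainer
ssh-add
```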


Install the Remote - SSH Visual Studio Code Extension.

Once it's installed, click the Remote Explorer icon on the left side of your Visual Studio Code window and click the + button to add a new SSH connection.

Type in ssh -A root@localhost -p 61000

Then right-click on the new localhost target in the list and click Connect to Host in New Window.

Once you're connected, open up a terminal window and run:

```shell
mkdir /workspaces
cd /workspaces
git clone SSH_URL_FOR_YOUR_REPOSITORY
```

Then click Open Folder on the left, select /workspaces/NAME_OF_YOUR_REPO and you should be good to go.

To see a working example of the setup, have a look at

How much will this cost?

I've picked a relatively snappy P1v3 which has 2 cores and 8GB of RAM. Assuming a working year of approximately 48 weeks, working for 5 days a week with the machine running for half the day, a devcontainer running in an app service would cost £30.95 per developer.
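As a sanity check on that figure, here's the arithmetic; I'm assuming "half the day" means 12 hours and that the quoted cost is per developer per month, and the hourly rate is backed out of the quoted figure rather than read off the Azure price list:

```shell
# 48 weeks x 5 days = 240 working days a year, i.e. 20 a month
DAYS_PER_MONTH=$((48 * 5 / 12))
# "half the day" taken to mean 12 hours
HOURS_PER_MONTH=$((DAYS_PER_MONTH * 12))
echo "$HOURS_PER_MONTH hours/month"
# the hourly rate (in GBP) implied by the quoted monthly cost
awk -v cost=30.95 -v hours="$HOURS_PER_MONTH" \
  'BEGIN { printf "implied rate: %.3f/hour\n", cost / hours }'
```

That implied rate of roughly £0.13/hour is in the right ballpark for a small Premium v3 plan, but check current pricing for your region.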


My app service has restarted and I can't connect

If your app service restarts, you'll need to copy your public ssh key again. Also, you'll have an old entry in your known_hosts file. If you try and ssh into your machine you'll see this delightful message:

```
$ ssh root@localhost -p 61000
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
```

Simply delete the offending line from your known_hosts file. You could also switch off host key checking using StrictHostKeyChecking no and UserKnownHostsFile /dev/null in your ssh config file, but that would potentially open you up to man-in-the-middle attacks.
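Rather than hunting for the line by hand, ssh-keygen can delete it for you. Entries for non-default ports are stored under [host]:port, so that's the name to remove:

```shell
# make sure the file exists, then drop the stale entry for the tunnelled port
mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
ssh-keygen -R "[localhost]:61000"
```

ssh-keygen leaves a known_hosts.old backup next to the file in case you remove the wrong entry.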

I'm having trouble getting SSH and Visual Studio Code working together

Have a look here:

Building an Azure App Service with a custom container using Bicep

· 3 min read

Azure App Service with custom containers is a convenient way to host docker containers in Azure. While there are a number of tutorials on how to do this with az cli there aren't too many that show how to do it with ARM templates or Azure Bicep. Read on to see how to set up an Azure Container Registry, create scope maps and scope mapped tokens, build and push a custom image and finally use that custom image in your Azure App Service.

First, a word on scope maps and tokens

Azure Container Registry has a number of authentication mechanisms for pulling docker images. The only one that lets you limit access to specific docker images is scope-mapped tokens, which is why I'm going to use them in this example.


Scope Maps and Scope Map Tokens are currently in preview, so use with caution in production.

Setting up your ACR

You'll need a Container Registry to host your custom container. The following bicep sets up a Premium Azure Container Registry. The reason I'm using a Premium SKU is so that I can create scope maps and scope-mapped tokens.

```bicep
param acrName string
param location string = resourceGroup().location

resource acr 'Microsoft.ContainerRegistry/registries@2019-12-01-preview' = {
  name: acrName
  location: location
  sku: {
    name: 'Premium' // to support scope mapped tokens
  }
  properties: {
    adminUserEnabled: false
  }
}
```

You may also want to assign an owner to your ACR:

```bicep
param ownerPrincipalId string

resource ownerRoleAssignment 'Microsoft.Authorization/roleAssignments@2018-01-01-preview' = {
  name: guid('${}/${ownerPrincipalId}/owner')
  scope: acr
  properties: {
    // built-in Owner role definition
    roleDefinitionId: '/subscriptions/${subscription().subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/8e3af657-a8ff-443c-a75c-2fe8c4bcb635'
    principalId: ownerPrincipalId
  }
}
```

And set up your scope maps and scope mapped tokens

```bicep
// create a scope map for your repository
resource bicepAppServiceContainerScopeMap 'Microsoft.ContainerRegistry/registries/scopeMaps@2020-11-01-preview' = {
  parent: acr
  name: 'bicepAppServiceContainer'
  properties: {
    actions: [
      'repositories/bicep-app-service-container/content/read'
      'repositories/bicep-app-service-container/metadata/read'
    ]
  }
}

// create a token and associate it with your scope map
resource bicepAppServiceContainerToken 'Microsoft.ContainerRegistry/registries/tokens@2020-11-01-preview' = {
  parent: acr
  name: 'bicepAppServiceContainer'
  properties: {
    scopeMapId:
    status: 'enabled'
  }
}
```

Generating passwords for your tokens

The bicep above will create the scope map tokens but will not generate the passwords. You'll need az cli for that.

ACR_PULL_TOKEN=$(az acr token credential generate -n bicepAppServiceContainer -r $ACR_NAME --expiration-in-days 30 --query passwords[0].value -o tsv)

This will generate the passwords and set ACR_PULL_TOKEN to be the value of one of the generated passwords.
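To check the token works before wiring it into the App Service, you can log in to the registry with docker. This is a sketch: myregistry is a placeholder, and the docker username is the token name from the bicep above:

```shell
# the registry's login server is the registry name plus the azurecr.io suffix
ACR_NAME=myregistry          # placeholder: your registry name
ACR_LOGIN_SERVER="$ACR_NAME.azurecr.io"
echo "$ACR_LOGIN_SERVER"

# the token name is the docker username; the generated password is the secret
# (commented out: needs a real registry and a real token)
# echo "$ACR_PULL_TOKEN" | docker login "$ACR_LOGIN_SERVER" \
#     --username bicepAppServiceContainer --password-stdin
```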

Setting up your App Service with a Custom Container

Use the bicep below to create a linux App Service plan and an App Service that uses your custom container.

```bicep
@description('The name of the app service that you wish to create.')
param siteName string

@description('The docker registry hostname.')
param dockerRegistryHost string

@description('The docker registry username.')
param dockerUsername string

@secure()
@description('The docker registry password.')
param dockerPassword string

@description('The relative docker image name.')
param dockerImage string

var servicePlanName = 'plan-${siteName}-001'

resource servicePlan 'Microsoft.Web/serverfarms@2016-09-01' = {
  kind: 'linux'
  name: servicePlanName
  location: resourceGroup().location
  properties: {
    name: servicePlanName
    reserved: true
  }
  sku: {
    tier: 'Standard'
    name: 'S1'
  }
}

resource siteName_resource 'Microsoft.Web/sites@2016-08-01' = {
  name: siteName
  location: resourceGroup().location
  properties: {
    siteConfig: {
      appSettings: [
        {
          name: 'WEBSITES_ENABLE_APP_SERVICE_STORAGE'
          value: 'false'
        }
        {
          name: 'DOCKER_REGISTRY_SERVER_URL'
          value: 'https://${dockerRegistryHost}'
        }
        {
          name: 'DOCKER_REGISTRY_SERVER_USERNAME'
          value: dockerUsername
        }
        {
          name: 'DOCKER_REGISTRY_SERVER_PASSWORD'
          value: dockerPassword
        }
      ]
      linuxFxVersion: 'DOCKER|${dockerRegistryHost}/${dockerImage}'
    }
    serverFarmId:
  }
}
```

To see a working example, have a look at the following repository

Keep your Azure Costs low by removing dev resources from resource groups on a schedule.

· 2 min read

I have a Visual Studio subscription that gives me £40.00 of Azure Credit each month to experiment in Azure. I find it relatively easy to spend this if I'm not careful about deleting resources after I've spun them up. Fortunately, I can use a GitHub Actions workflow on a schedule to save me when I forget.

I usually associate a set of resource groups with a repository in GitHub. I have a couple of options: I could delete the resource groups themselves, or I could delete the resources in the resource groups. In this example, I've chosen just to delete the resources.

To do this, I use complete mode with an empty ARM template. In complete mode, Resource Manager deletes any resources in the resource group that aren't declared in the template, so deploying an empty template clears out the whole group:

```json
{
    "$schema": "",
    "contentVersion": "",
    "resources": []
}
```

The GitHub Action then looks as follows:

```yaml
name: destroy

on:
  schedule:
    - cron: "0 0 * * *"
  # add workflow_dispatch to be able to trigger the workflow manually
  workflow_dispatch:

jobs:
  destroy:
    runs-on: ubuntu-latest
    steps:
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      - uses: actions/checkout@v2

      - name: Destroy
        id: deploy-acr
        uses: azure/CLI@v1
        with:
          azcliversion: 2.23.0
          inlineScript: |
            az deployment group create --name destroy --resource-group NAME_OF_YOUR_RESOURCE_GROUP --template-file .github/workflows/empty.json --mode Complete
```

Check out the following repo for a working example:

How to rotate your AZURE_CREDENTIALS in GitHub with Terraform

· 3 min read

If you're deploying your Azure infrastructure from GitHub, you'll need Azure service principal credentials stored as secret variables. You may also want to rotate those credentials. This entry describes how you can use Terraform in your GitHub actions to configure and rotate your Azure service principal credentials.

In a previous post, I describe how to safely rotate credentials using Terraform. This post builds on that by showing you how to create client secrets for your Azure service principals and store them as secrets in GitHub.

First off, we'll create a Terraform module that will manage the secret rotation in Azure and GitHub. This stores everything you need to be able to deploy from GitHub to Azure, including information about the credentials, service principal, tenant and subscription.

It stores them both as an AZURE_CREDENTIALS JSON block that matches the output of az ad sp create-for-rbac --sdk-auth (useful for the Azure Login GitHub Action), and broken out into ARM_CLIENT_ID, ARM_TENANT_ID, ARM_SUBSCRIPTION_ID and ARM_CLIENT_SECRET (useful for running Terraform in GitHub Actions).

The format of the sdk-auth that the Azure Login GitHub Action requires is as follows:

```json
{
  "clientId": "APPLICATION_ID",
  "clientSecret": "CLIENT_SECRET",
  "subscriptionId": "SUBSCRIPTION_ID",
  "tenantId": "TENANT_ID",
  "activeDirectoryEndpointUrl": "",
  "resourceManagerEndpointUrl": "",
  "activeDirectoryGraphResourceId": "",
  "sqlManagementEndpointUrl": "",
  "galleryEndpointUrl": "",
  "managementEndpointUrl": ""
}
```
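If you want to sanity-check a blob before pasting it into a GitHub secret, a quick jq loop over the four fields the azure/login action actually requires does the job (the values here are placeholders, not real credentials):

```shell
# placeholder blob; substitute your real values
CREDS='{"clientId":"APPLICATION_ID","clientSecret":"CLIENT_SECRET","subscriptionId":"SUBSCRIPTION_ID","tenantId":"TENANT_ID"}'

# azure/login needs at least these four fields to be present and non-null
for field in clientId clientSecret subscriptionId tenantId; do
  echo "$CREDS" | jq -e --arg f "$field" '.[$f]' > /dev/null || echo "missing: $field"
done
```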

The following files will create a Terraform module that will allow you to safely rotate secrets in Azure and store the currently active secret in GitHub secrets for use in GitHub Actions:

```hcl
provider "azurerm" {
  features {}
}

variable "subscription_id" { type = string }
variable "tenant_id" { type = string }
variable "repository" { type = string }
variable "application_id" { type = string }
variable "application_object_id" { type = string }
variable "date" { type = string }

locals {
  date        = tonumber(
  odd_keeper  = floor(( + 1) / 2)
  even_keeper = floor( / 2)
  use_even    = % 2 == 0
}

resource "random_uuid" "odd" {}

resource "azuread_application_password" "odd" {
  application_object_id = var.application_object_id
  description           = "odd"
  value                 = random_password.odd.result
  end_date_relative     = "1440h"
  key_id                = random_uuid.odd.result
}

resource "random_password" "odd" {
  keepers = {
    "date" = local.odd_keeper
  }
  length = 64
}

resource "random_uuid" "even" {}

resource "azuread_application_password" "even" {
  application_object_id = var.application_object_id
  description           = "even"
  value                 = random_password.even.result
  end_date_relative     = "1440h"
  key_id                = random_uuid.even.result
}

resource "random_password" "even" {
  keepers = {
    "date" = local.even_keeper
  }
  length = 64
}

resource "github_actions_secret" "terraform" {
  repository      = var.repository
  secret_name     = "AZURE_CREDENTIALS"
  plaintext_value = <<-EOT
    {
      "clientId": "${var.application_id}",
      "clientSecret": "${local.use_even ? random_password.even.result : random_password.odd.result}",
      "subscriptionId": "${var.subscription_id}",
      "tenantId": "${var.tenant_id}",
      "activeDirectoryEndpointUrl": "",
      "resourceManagerEndpointUrl": "",
      "activeDirectoryGraphResourceId": "",
      "sqlManagementEndpointUrl": "",
      "galleryEndpointUrl": "",
      "managementEndpointUrl": ""
    }
  EOT
}

resource "github_actions_secret" "arm_client_id" {
  repository      = var.repository
  secret_name     = "ARM_CLIENT_ID"
  plaintext_value = var.application_id
}

resource "github_actions_secret" "arm_client_secret" {
  repository      = var.repository
  secret_name     = "ARM_CLIENT_SECRET"
  plaintext_value = local.use_even ? random_password.even.result : random_password.odd.result
}

resource "github_actions_secret" "arm_subscription_id" {
  repository      = var.repository
  secret_name     = "ARM_SUBSCRIPTION_ID"
  plaintext_value = var.subscription_id
}

resource "github_actions_secret" "arm_tenant_id" {
  repository      = var.repository
  secret_name     = "ARM_TENANT_ID"
  plaintext_value = var.tenant_id
}
```
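To see why the odd/even split makes rotation safe, it helps to trace the keeper arithmetic for a few consecutive values of the date counter. Each tick regenerates only the secret that is about to become active, so the previously active secret stays valid for one more period; the values here are illustrative, not real dates:

```shell
# simulate four consecutive rotation periods of the date counter
for date in 100 101 102 103; do
  odd_keeper=$(( (date + 1) / 2 ))
  even_keeper=$(( date / 2 ))
  if [ $(( date % 2 )) -eq 0 ]; then active=even; else active=odd; fi
  echo "date=$date odd_keeper=$odd_keeper even_keeper=$even_keeper active=$active"
done
```

At date 101 the odd keeper ticks from 50 to 51, so the odd password is regenerated just as it becomes active, while the even password (active at date 100) is left untouched until date 102.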

Using the module

```hcl
module "example_github_azure" {
  source                = "LOCATION_OF_MODULE"
  subscription_id       = AZURE_SUBSCRIPTION_ID
  tenant_id             = AZURE_TENANT_ID
  repository            = GITHUB_REPO_NAME
  application_id        = AZURE_APPLICATION_ID
  application_object_id = AZURE_APPLICATION_OBJECT_ID
  date                  =
}
```

To see it in action, have a look at this repository

Microsoft Graph API Terraform Data Source

· 2 min read

Most Terraform examples that reference permissions from the Microsoft Graph use the GUIDs for the permissions. This makes it harder to write the Terraform config, as you have to look up the GUIDs for each permission. It also makes it harder to do code reviews where reviewers typically just believe the comment that describes what permission the GUID represents.

This is what the required_resource_access block for an Azure AD application for k8s looks like using just the GUIDs (this is from a real example online):

```hcl
required_resource_access {
  resource_app_id = "00000003-0000-0000-c000-000000000000"

  resource_access {
    id   = "7ab1d382-f21e-4acd-a863-ba3e13f7da61"
    type = "Role"
  }

  resource_access {
    id   = "06da0dbc-49e2-44d2-8312-53f166ab848a"
    type = "Scope"
  }

  resource_access {
    id   = "e1fe6dd8-ba31-4d61-89e7-88639da4683d"
    type = "Scope"
  }
}
```

Your AD tenant should have enterprise applications (service principals) for the various Microsoft services including the Microsoft Graph.

Microsoft Graph Enterprise Application

You can look this up as a data source using Terraform as follows:

```hcl
data "azuread_service_principal" "graph" {
  # graph api application id
  application_id = "00000003-0000-0000-c000-000000000000"
}
```

This will return a data source that has all of the oauth2 permissions and app roles for the Microsoft Graph. They can be awkward to work with, so I'll usually create a new object that maps the permission name to the permission id e.g.:

```hcl
locals {
  graph = {
    application_id     = data.azuread_service_principal.graph.application_id
    app_roles          = { for app_role in data.azuread_service_principal.graph.app_roles : app_role.value => }
    oauth2_permissions = { for oauth2_permission in data.azuread_service_principal.graph.oauth2_permissions : oauth2_permission.value => }
  }
}
```

Using this, the example above would look like:

```hcl
required_resource_access {
  resource_app_id = local.graph.application_id

  resource_access {
    id   = local.graph.app_roles["Directory.Read.All"]
    type = "Role"
  }

  resource_access {
    id   = local.graph.oauth2_permissions["Directory.Read.All"]
    type = "Scope"
  }

  resource_access {
    id   = local.graph.oauth2_permissions["User.Read"]
    type = "Scope"
  }
}
```

Note: this works for more than just the Microsoft Graph; you can use the same technique to look up app roles and permissions from any of your enterprise applications.

Bootstrap Terraform state in Azure

· 2 min read

It's a good idea to store your Terraform state in a remote backend because:

  • Terraform state often contains sensitive information e.g. credentials, access tokens etc. so using a backend that has access control and encrypts the state at rest and in transit will keep it safer.
  • Your Terraform state can be backed up
  • You can retrieve your state when running scripts in CI / CD pipelines.

If you're using Terraform to configure Azure resources, you'll probably want to use the azurerm terraform backend.

This stores your state in an Azure Storage Account. The following Terraform will create a storage account that can be used to store your Terraform state in Azure:

```hcl
resource "azurerm_resource_group" "terraform_state" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_storage_account" "terraform_state" {
  name                     = var.storage_account_name
  resource_group_name      =
  location                 = azurerm_resource_group.terraform_state.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  min_tls_version          = "TLS1_2"
}

resource "azurerm_storage_container" "terraform_state" {
  name                  = var.container_name
  storage_account_name  =
  container_access_type = "private"
}

variable "resource_group_name" { type = string }
variable "storage_account_name" { type = string }
variable "container_name" { type = string }
variable "location" { type = string }
```

To run it, you'll need to supply a bootstrap.tfvars file with the following variables set:


```hcl
# the name of the resource group
# e.g. "rg-mytfstate-shared-001"
resource_group_name = ""

# the name of the storage account
# e.g. "sttfstate001"
storage_account_name = ""

# the name of the container
# e.g. "tfstate"
container_name = "tfstate"

# the location of the resource group
# and storage account e.g. "West Europe"
location = "West Europe"
```

To apply the terraform, run the following:

```shell
# log into azure, this assumes you have sufficient
# privileges to create resource groups
az login

# initialise terraform
terraform init

# show a plan
terraform plan -var-file=bootstrap.tfvars

# apply the terraform
terraform apply -var-file=bootstrap.tfvars
```