Terraform in Azure DevOps
Summary: On this page I'll show you how to use an Azure DevOps pipeline to deploy Azure resources using Terraform.
Date: 2 February 2025
The topics covered are:
- Using a service principal to authenticate to Azure
- Setting up a remote backend for your tfstate file
- Using a federated service principal to authenticate to Azure
- Using the Azure DevOps Pipeline Extension
Service Principal
As this post is about Terraform I won't go too deep into creating the service principal. Log in to the Azure Portal, go to Entra ID and create a new app registration. If you do this manually in the portal, the portal automatically creates an Enterprise Application for you; this is the service principal. Once you've created the app registration, create a client secret. You'll need the Application (Client) ID as well as the secret to set up the service connection in Azure DevOps. Don't forget to assign permissions to the service principal.
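If you prefer to script this instead of clicking through the portal, a minimal sketch with the Azure CLI could look like the following (the service principal name and the Contributor role on the subscription are example choices, not requirements):

# Creates the app registration and its service principal, adds a client secret,
# and assigns the Contributor role on the subscription in one command.
# The output contains the appId (Application/Client ID), password (secret) and
# tenant you need for the service connection.
az ad sp create-for-rbac \
  --name "sp-terraform-deployment" \
  --role "Contributor" \
  --scopes "/subscriptions/<your-subscription-id>"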
After creating the service principal you can create the service connection in Azure DevOps. Go to Project settings > Service connections and create a new service connection. Select “Azure Resource Manager” and “App Registration or Managed Identity (manual)”. Set the Credential to Secret and fill in the Application (Client) ID and the secret, as well as the subscription and tenant IDs.
Now that we've set up the service principal and service connection, we can configure a pipeline that uses the service principal to authenticate to Azure and creates a storage account and container for the tfstate file. Then we'll use the AzureCLI task to deploy the Terraform configuration files.
Azure DevOps Pipeline
This is an example of a pipeline that sets up a Terraform state file backend and then deploys your Terraform configuration files. Notice the following:
- The service connection in Azure DevOps is called “arm-terraform”
- The storage account and container that are used for the tfstate file are called “saeuwtfdeployment” and “terraform” respectively and are located in the resource group “rg-euw-terraform-deployment”
- All the terraform configuration files are located in the folder “tf/application”
- In the task 'Fetch credentials for Azure', the service principal credentials are fetched and stored in pipeline variables that are used in the terraform init command
name: $(Build.DefinitionName)-$(Build.BuildId)
appendCommitMessageToRunName: false
trigger: none

parameters:
  - name: action
    displayName: Action
    type: string
    default: 'Plan'
    values:
      - Initialize remote
      - Plan
      - Apply

variables:
  - name: azureDevOpsServiceConnection
    value: arm-terraform
  - name: backendResourceGroup
    value: rg-euw-terraform-deployment
  - name: backendAzureStorageAccount
    value: saeuwtfdeployment
  - name: backendAzureStorageContainer
    value: terraform
  - name: backendStateFile
    value: terraform.tfstate
  - name: workingDirectory
    value: $(System.DefaultWorkingDirectory)/tf/application
  - name: action
    value: ${{ parameters.action }}
  - name: subscription
    value: 30b3c71d-a123-a123-a123-abcd12345678

pool:
  vmImage: ubuntu-latest

stages:
  - stage: initialize_remote_backend
    displayName: 'Initialize Remote Backend'
    condition: eq('${{ parameters.action }}', 'Initialize remote')
    jobs:
      - job: create_storage_account_and_container
        displayName: 'Create SA and Container'
        steps:
          - task: AzureCLI@2
            displayName: Create SA and Container
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                set -euo pipefail
                az account set --subscription $(subscription)
                az storage account create \
                  --name $(backendAzureStorageAccount) \
                  --resource-group $(backendResourceGroup) \
                  --location westeurope \
                  --sku Standard_LRS \
                  --min-tls-version TLS1_2 \
                  --https-only true \
                  --allow-blob-public-access false \
                  --tags team="DevOps Team" company="Getshifting" environment="prd" backup="false"
                az storage container create \
                  --name $(backendAzureStorageContainer) \
                  --account-name $(backendAzureStorageAccount)

  - stage: terraform_plan_apply
    displayName: 'Terraform Deploy'
    condition: or( contains('${{ parameters.action }}', 'Plan'), contains('${{ parameters.action }}', 'Apply'))
    jobs:
      - job: terraform_plan_apply
        displayName: 'Terraform Deploy'
        steps:
          - task: AzureCLI@2
            displayName: Fetch credentials for Azure
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: bash
              addSpnToEnvironment: true
              useGlobalConfig: true
              scriptLocation: inlineScript
              inlineScript: |
                echo "##vso[task.setvariable variable=ARM_TENANT_ID;]$tenantId"
                echo "##vso[task.setvariable variable=ARM_CLIENT_ID;issecret=true]$servicePrincipalId"
                echo "##vso[task.setvariable variable=ARM_CLIENT_SECRET;issecret=true]$servicePrincipalKey"
          - task: AzureCLI@2
            displayName: 'Terraform Init'
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                terraform version
                terraform init \
                  -backend-config=storage_account_name=$(backendAzureStorageAccount) \
                  -backend-config=container_name=$(backendAzureStorageContainer) \
                  -backend-config=key=$(backendStateFile) \
                  -backend-config=resource_group_name=$(backendResourceGroup) \
                  -backend-config=subscription_id=$(subscription) \
                  -backend-config=tenant_id=$(ARM_TENANT_ID) \
                  -backend-config=client_id=$(ARM_CLIENT_ID) \
                  -backend-config=client_secret=$(ARM_CLIENT_SECRET)
              workingDirectory: $(workingDirectory)
          - task: AzureCLI@2
            displayName: 'Terraform Plan'
            condition: and(succeeded(), eq(variables['action'], 'Plan'))
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                set -euo pipefail
                az account set --subscription $(subscription)
                terraform plan \
                  -var-file=env/prd.tfvars \
                  -input=false \
                  -compact-warnings
              workingDirectory: $(workingDirectory)
          - task: AzureCLI@2
            displayName: 'Terraform Apply'
            condition: and(succeeded(), eq(variables['action'], 'Apply'))
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                az account set --subscription $(subscription)
                terraform apply \
                  -var-file=env/prd.tfvars \
                  -compact-warnings \
                  -input=false \
                  -auto-approve
              workingDirectory: $(workingDirectory)
          - task: AzureCLI@2
            displayName: 'Terraform Break Lease'
            condition: eq(variables['Agent.JobStatus'], 'Canceled')
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                # This step will only run if one of the previous steps was canceled. This might leave the state file locked.
                # This step will break the lease on the state file.
                # In case this fails, the manual way is to go to the storage account -> Containers -> terraform
                # -> Select the $(backendStateFile) file -> Break lease
                set -euo pipefail
                az account set --subscription $(subscription)
                # Get storage access key
                AZURE_STORAGE_KEY=$(az storage account keys list \
                  --account-name $(backendAzureStorageAccount) \
                  --resource-group $(backendResourceGroup) \
                  --query "[0].value" \
                  --output tsv)
                # Break the lease on the state file if it exists
                az storage blob lease break \
                  --container-name $(backendAzureStorageContainer) \
                  --blob-name $(backendStateFile) \
                  --account-key $AZURE_STORAGE_KEY \
                  --account-name $(backendAzureStorageAccount) || true
              workingDirectory: $(workingDirectory)
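One thing the pipeline assumes but doesn't show: because all backend settings are passed through -backend-config arguments (partial configuration), the Terraform files in tf/application only need to declare an empty azurerm backend block. A minimal sketch of what that could look like (the file layout and provider version choices are up to you):

# Minimal skeleton for tf/application; the actual backend values
# (storage account, container, key, credentials) are supplied by the
# pipeline via -backend-config during terraform init.
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }

  backend "azurerm" {}
}

provider "azurerm" {
  features {}
}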
Federated Service Principal
Federated service principals do not rely on a secret for authentication. This gives the benefit of not having a secret that needs to be rotated. The easiest way to create one is via the Azure DevOps service connection wizard: go to Project settings > Service connections and create a new service connection. Select “Azure Resource Manager” with Identity type “App Registration (automatic)” and Credential “Workload identity federation”. This will create a federated service principal for you with the correct settings. Again, don't forget to assign permissions to the service principal.
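If you'd rather create the federated credential on the app registration yourself, a rough sketch with the Azure CLI is shown below. The organization ID, organization/project names and the credential name are placeholders you'd replace with your own values, and the subject must match the service connection name (arm-terraform in this post).

# federated-credential.json (placeholders, adjust to your organization):
# {
#   "name": "azure-devops-arm-terraform",
#   "issuer": "https://vstoken.dev.azure.com/<organization-id>",
#   "subject": "sc://<organization-name>/<project-name>/arm-terraform",
#   "audiences": [ "api://AzureADTokenExchange" ]
# }
az ad app federated-credential create \
  --id <application-id> \
  --parameters federated-credential.json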
Azure DevOps Pipeline
Now the pipeline is set up a little differently. Check the pipeline YAML below and notice the following differences:
- The task 'Fetch credentials for Azure' is not needed anymore
- The task 'Terraform Init' is now using the 'addSpnToEnvironment' option directly and has the use_oidc and oidc_token options configured
name: $(Build.DefinitionName)-$(Build.BuildId)
appendCommitMessageToRunName: false
trigger: none

parameters:
  - name: action
    displayName: Action
    type: string
    default: 'Plan'
    values:
      - Initialize remote
      - Plan
      - Apply

variables:
  - name: azureDevOpsServiceConnection
    value: arm-terraform
  - name: backendResourceGroup
    value: rg-euw-terraform-deployment
  - name: backendAzureStorageAccount
    value: saeuwtfdeployment
  - name: backendAzureStorageContainer
    value: terraform
  - name: backendStateFile
    value: terraform.tfstate
  - name: workingDirectory
    value: $(System.DefaultWorkingDirectory)/tf/application
  - name: action
    value: ${{ parameters.action }}
  - name: subscription
    value: 30b3c71d-a123-a123-a123-abcd12345678

pool:
  vmImage: ubuntu-latest

stages:
  - stage: initialize_remote_backend
    displayName: 'Initialize Remote Backend'
    condition: eq('${{ parameters.action }}', 'Initialize remote')
    jobs:
      - job: create_storage_account_and_container
        displayName: 'Create SA and Container'
        steps:
          - task: AzureCLI@2
            displayName: Create SA and Container
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                set -euo pipefail
                az account set --subscription $(subscription)
                az storage account create \
                  --name $(backendAzureStorageAccount) \
                  --resource-group $(backendResourceGroup) \
                  --location westeurope \
                  --sku Standard_LRS \
                  --min-tls-version TLS1_2 \
                  --https-only true \
                  --allow-blob-public-access false \
                  --tags team="DevOps Team" company="Getshifting" environment="prd" backup="false"
                az storage container create \
                  --name $(backendAzureStorageContainer) \
                  --account-name $(backendAzureStorageAccount)

  - stage: terraform_plan_apply
    displayName: 'Terraform Deploy'
    condition: or( contains('${{ parameters.action }}', 'Plan'), contains('${{ parameters.action }}', 'Apply'))
    jobs:
      - job: terraform_plan_apply
        displayName: 'Terraform Deploy'
        steps:
          - task: AzureCLI@2
            displayName: 'Terraform Init'
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              addSpnToEnvironment: true
              scriptLocation: 'inlineScript'
              inlineScript: |
                set -euo pipefail
                terraform version
                terraform init \
                  -backend-config=storage_account_name=$(backendAzureStorageAccount) \
                  -backend-config=container_name=$(backendAzureStorageContainer) \
                  -backend-config=key=$(backendStateFile) \
                  -backend-config=resource_group_name=$(backendResourceGroup) \
                  -backend-config=subscription_id=$(subscription) \
                  -backend-config=tenant_id=$tenantId \
                  -backend-config=client_id=$servicePrincipalId \
                  -backend-config=use_oidc=true \
                  -backend-config=oidc_token=$idToken
              workingDirectory: $(workingDirectory)
          - task: AzureCLI@2
            displayName: 'Terraform Plan'
            condition: and(succeeded(), eq(variables['action'], 'Plan'))
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                set -euo pipefail
                az account set --subscription $(subscription)
                terraform plan \
                  -var-file=env/prd.tfvars \
                  -input=false \
                  -compact-warnings
              workingDirectory: $(workingDirectory)
          - task: AzureCLI@2
            displayName: 'Terraform Apply'
            condition: and(succeeded(), eq(variables['action'], 'Apply'))
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                az account set --subscription $(subscription)
                terraform apply \
                  -var-file=env/prd.tfvars \
                  -compact-warnings \
                  -input=false \
                  -auto-approve
              workingDirectory: $(workingDirectory)
          - task: AzureCLI@2
            displayName: 'Terraform Break Lease'
            condition: eq(variables['Agent.JobStatus'], 'Canceled')
            inputs:
              azureSubscription: '$(azureDevOpsServiceConnection)'
              scriptType: 'bash'
              scriptLocation: 'inlineScript'
              inlineScript: |
                # This step will only run if one of the previous steps was canceled. This might leave the state file locked.
                # This step will break the lease on the state file.
                # In case this fails, the manual way is to go to the storage account -> Containers -> terraform
                # -> Select the $(backendStateFile) file -> Break lease
                set -euo pipefail
                az account set --subscription $(subscription)
                # Get storage access key
                AZURE_STORAGE_KEY=$(az storage account keys list \
                  --account-name $(backendAzureStorageAccount) \
                  --resource-group $(backendResourceGroup) \
                  --query "[0].value" \
                  --output tsv)
                # Break the lease on the state file if it exists
                az storage blob lease break \
                  --container-name $(backendAzureStorageContainer) \
                  --blob-name $(backendStateFile) \
                  --account-key $AZURE_STORAGE_KEY \
                  --account-name $(backendAzureStorageAccount) || true
              workingDirectory: $(workingDirectory)
Azure DevOps Pipeline Extension
Note: I wrote this part around 2021, so the information might be outdated. The conclusion, however, that you probably shouldn't use the extension in a production environment, still stands. I would suggest using one of the methods above.
The Terraform extension provided by Microsoft DevLabs configures the backend in the task itself. This has some limitations, which is why you also need to configure the backend in the Terraform configuration file:
terraform {
  backend "azurerm" {
    resource_group_name  = "rg_terradevops"
    storage_account_name = "shiftterrastatefile"
    container_name       = "tfstate"
    key                  = "storage.tfstate"
  }
}

variable "storage_account_name" {
  type    = string
  default = "storageaz400terraform"
}

variable "resource_group_name" {
  type    = string
  default = "rg_az400_terraform"
}

provider "azurerm" {
  subscription_id = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  tenant_id       = "yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy"
  features {}
}

resource "azurerm_resource_group" "grp" {
  name     = var.resource_group_name
  location = "West Europe"
}

resource "azurerm_storage_account" "store" {
  name                     = var.storage_account_name
  resource_group_name      = azurerm_resource_group.grp.name
  location                 = azurerm_resource_group.grp.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
Now follow these steps to deploy the resource group and storage account using Terraform in Azure DevOps:
- Prepare your Azure subscription first. I'm assuming you've already connected Azure DevOps and your Azure subscription, but you'll also need a container to host your tfstate file as a remote backend. So create a resource group, create a storage account and create a blob container.
- Resource group: rg_terradevops
- Storage account: shiftterrastatefile
- Container: tfstate
- In Azure DevOps, go to the marketplace icon in your Terraform project and select “Browse Marketplace”
- Search for and install the Terraform extension created by Microsoft DevLabs
- Note that the extension does not get very good reviews and there might be better options in the marketplace for you, but for simplicity's sake we'll stick with the Microsoft-provided extension. Note that it is probably unwise to use this extension in a production environment
- Create an (empty) classic pipeline in the same project where you've stored the Terraform files
- Add the task “Install Terraform” to the pipeline. Note that by default version 0.12.3 is installed. If you run the pipeline now you can check the output from the task and edit the version field of the task to the latest version
- Add the task “Terraform” to the pipeline. The first task needs to be configured as init
- Display name: terraform init
- Provider: azurerm
- Command: init
- Configuration directory: browse to the folder in your repository that holds the .tf file
- Azure subscription, resource group, storage account, container and key: Set these up as configured in the Azure portal. Note that “key” is a bit confusing: it is the name of the tfstate file, optionally prefixed with a folder path.
- Add the task “Terraform” to the pipeline. The second task needs to be configured as plan
- Display name: terraform plan
- Provider: azurerm
- Command: plan
- Configuration directory: browse to the folder in your repository that holds the .tf file
- Azure subscription: Set up as configured in the Azure portal.
- Add the task “Terraform” to the pipeline. The third task needs to be configured as apply and validate
- Display name: terraform apply and validate
- Provider: azurerm
- Command: validate and apply
- Configuration directory: browse to the folder in your repository that holds the .tf file
- Azure subscription: Set up as configured in the Azure portal.
If you run the pipeline and check the blob container in the Azure portal afterwards, you'll notice a storage.tfstate file in the container. You can even check its contents using “Edit”.
Azure DevOps Pipeline
This would be the pipeline if you'd set it up in YAML:
pool:
  name: Azure Pipelines
steps:
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-installer-task.TerraformInstaller@0
  displayName: 'Install Terraform 1.0.8'
  inputs:
    terraformVersion: 1.0.8
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform Init'
  inputs:
    workingDirectory: terratest
    backendServiceArm: 'GetShifting Azure subscription (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'
    backendAzureRmResourceGroupName: 'rg_terradevops'
    backendAzureRmStorageAccountName: shiftterrastatefile
    backendAzureRmContainerName: tfstate
    backendAzureRmKey: storage.tfstate
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform Plan'
  inputs:
    command: plan
    workingDirectory: terratest
    environmentServiceNameAzureRM: 'GetShifting Azure subscription (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'
- task: ms-devlabs.custom-terraform-tasks.custom-terraform-release-task.TerraformTaskV2@2
  displayName: 'Terraform Validate and Apply'
  inputs:
    command: apply
    workingDirectory: terratest
    environmentServiceNameAzureRM: 'GetShifting Azure subscription (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)'