In this post, I will show how to run Azure Az module scripts as tasks in an Azure DevOps pipeline. Working examples can be found in my GitHub AzireFirewall/DevSecOps repo, which contains the content for my DevSecOps articles.
Why Use A Script?
One can use simple deployment tasks in a DevOps pipeline:
- A task that runs a deployment
- A simple PowerShell/Azure CLI task that runs an inline script
But you might want something that does more. For example, you might want to do some error checking. Or maybe you are going to use a custom container (Azure Container Registry) and execute complex tasks from it. In my case, I wanted to do lots of error checking and give myself the ability to wrap scripts around my deployments.
The First Obstacle: Documentation
Azure DevOps documentation is notorious for being:
- Bad
- Out of date
- Hard to find
- Incomplete
The article you need to get started with PowerShell can be found here. There is a Hello World example that shows how to pass two parameters into a PowerShell script. I used that as the basis of my deployment – but it is not enough! I will fix that here.
The Second Obstacle: Examples
The DevOps world is very much a closed box. There are lots of people doing stuff, but finding working examples is a nightmare. Once again, I will fix that here. The goal is to:
- Store your code in an Azure DevOps repo
- Create an Azure DevOps pipeline to deploy that code to Azure. It will be authorised against the Azure subscription (or resource groups) using an App Registration that is stored in DevOps as a Service Connection.
- The pipeline will execute a PowerShell script to deploy code from the DevOps repo into your subscription (or resource groups).
My Example
For this post, I will use the hub deployment from my GitHub AzireFirewall/DevSecOps repo – this deploys a VNet-based (legacy) hub in an Azure hub & spoke architecture. There are a number of things you are going to need.
Afterwards, I will explain how the pipeline calls the PowerShell script.
DevOps Repo
Set up a repository in Azure DevOps. Copy the required files into the repo. In my example, there are two folders:
- platform: This contains the files to deploy the hub in an Azure subscription. In my example, you will find bicep files with JSON parameter files.
- scripts: This folder contains scripts used in the deployment. In my example, deploy.ps1 is a generic script that will deploy an ARM/Bicep template to a selected subscription/resource group.
- .pipelines: This contains the files to deploy the code. In my example, you will find a YAML file for a DevOps pipeline called hub.yaml that will execute the script, deploy.ps1.
- .github/workflows: This is where you will find YAML files that create workflows in GitHub Actions. Any valid file will automatically create a workflow when there is a successful merge. My example contains hub.yaml to execute the script, deploy.ps1.
You can upload the files into the repo or sync using Git/VS Code.
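Putting the folders above together, the repo layout looks something like this (the file names are illustrative, based on the descriptions above):

```text
DevSecOps/
├── .github/
│   └── workflows/
│       └── hub.yaml             # GitHub Actions workflow
├── .pipelines/
│   └── hub.yaml                 # Azure DevOps pipeline
├── platform/
│   ├── hub.bicep                # Bicep template for the hub
│   └── hub-parameters.json      # JSON parameter file
└── scripts/
    └── deploy.ps1               # Generic ARM/Bicep deployment script
```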
Azure AD App Registration (Service Principal or SPN)
You will require an App Registration; this will be used by the Azure DevOps pipeline to gain authorised access to the Azure subscription.
Create an App Registration in Azure AD. Create a secret and store that secret somewhere safe (Azure Key Vault is a good location) because you will not be able to see the secret after creation. Grant the App Registration Owner rights to the Azure subscription (as in my example) or to the resource groups if you prefer that sort of deployment.
Service Connection
In your Azure DevOps project, browse to Project Settings > Service Connections. Create a new Service Connection of the Azure Resource Manager type. Select Service Principal (Manual) and enter the required details:
- Subscription ID: The ID of the subscription that will be the default subscription when the Service Principal signs in.
- Subscription Name: The name of the subscription that will be the default subscription when the Service Principal signs in.
- Service Principal ID: The Client ID of the App Registration (see its Overview page).
- Service Principal Key: The secret of the App Registration that you should have saved.
- Tenant ID: The ID of the Azure AD tenant. You can get this from the Overview of the App Registration.
- Service Connection Name: The name of the Service Connection; a naming standard helps here. For example, I name it after the scope of the deployment (the target subscription name). Remember this name because it will be used by the YAML file (a value called azureSubscription) to create the pipeline. In my example, the service connection is called "hub".
Hit Verify And Save – DevOps will verify that the Service Principal can sign in. You should double-check that it has rights over your desired scope in Azure (subscription in my example).
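The Service Connection name is then consumed by the pipeline YAML, typically via a variable. A minimal sketch (the variable name azureServiceConnection is my choice; only the Service Connection name "hub" comes from the steps above):

```yaml
variables:
  azureServiceConnection: 'hub'   # must match the Service Connection name exactly

steps:
- task: AzurePowerShell@5
  inputs:
    azureSubscription: $(azureServiceConnection)   # authorises the task against Azure
```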
Create the Pipeline
In my example, the hard work is done. A YAML file defines the pipeline; you just need to import the YAML file to create the pipeline.
Go back to your DevOps project, browse to Pipelines, and click New Pipeline. Choose Azure Repos Git as the code location. Select your repo and choose Existing Azure Pipelines YAML File. Use the dropdown list box to select the YAML file – /.pipelines/devops-hub.yml in my case. Save the pipeline and it will run. If you go into the running job you should see a prompt, asking you to authorise the pipeline to use the "hub" service connection.
The pipeline will execute a task that, in turn, will run a PowerShell script. That PowerShell script takes in several parameters that tell the script what to deploy (Bicep and parameter files), where to deploy it from (the temporary location where the files are downloaded into the pipeline container), and where to deploy it to (the subscription/resource group).
Executing A PowerShell Script
A pipeline has a section called steps; in here, you create a task for each job that you want to run. For example, I can execute an Azure CLI task, a PowerShell task that runs one/a few lines of inline code, or a PowerShell task that executes a PowerShell script from a file. It's that last one that is interesting.
I can create a PowerShell script that does lots of cool things and store it in my repo. That script can be edited and managed by change control (pull requests) just like my code that I'm deploying. There is an example of this below:
steps:
# Deploy Hub
- task: AzurePowerShell@5
  inputs:
    azureSubscription: $(azureServiceConnection)
    ScriptType: 'FilePath'
    ScriptPath: $(System.DefaultWorkingDirectory)/.pipelines/deploy.ps1
    ScriptArguments: > # Use this to avoid newline characters in multiline string
      -subscriptionId $(hubSub)
      -resourceGroupName $(hubNetworkResourceGroupName)
      -location $(location)
      -deployment "hub"
      -templateFile $(System.DefaultWorkingDirectory)"/platform/hub.bicep"
      -templateParameterFile $(System.DefaultWorkingDirectory)"/platform/hub-parameters.json"
    azurePowerShellVersion: 'LatestVersion'
  displayName: 'Deploy Hub'
The task runs version 5 of the Azure PowerShell task (see AzurePowerShell@5). That's an important thing to note. The Microsoft documentation for running PowerShell shows version 2 of the task, which does not support the Az modules – pretty pointless! Version 4 of the task added support for the Az modules.
The azureSubscription value refers to the Service Connection that we created earlier, authorising the pipeline against the desired target scope.
ScriptType is set to FilePath (not inline) so I can run a PowerShell script from a file. That requires me to use ScriptPath to define where the script is.
When the pipeline runs, it executes in a container (defined earlier in the YAML file – ubuntu-latest in my example, Linux to speed things up). The files in the repo are downloaded to a working folder. That location is saved as $(System.DefaultWorkingDirectory). I can then append the relative location of the PowerShell script in the repo (/.pipelines/deploy.ps1) to that path so the pipeline can find the script in the container.
My script is pretty generic. I can have:
- Multiple Bicep files/JSON parameter files
- Multiple target scopes
I can create a PowerShell task for each deployment and use the parameters to specialise the execution of the script.
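For example, a second task could reuse the same generic script against a different template and resource group. A sketch (the firewall-policy file names and variables here are hypothetical, invented only to show the pattern):

```yaml
# Hypothetical second task reusing the same generic deploy script
- task: AzurePowerShell@5
  displayName: 'Deploy Firewall Policy'
  inputs:
    azureSubscription: $(azureServiceConnection)
    ScriptType: 'FilePath'
    ScriptPath: $(System.DefaultWorkingDirectory)/.pipelines/deploy.ps1
    ScriptArguments: >
      -subscriptionId $(hubSub)
      -resourceGroupName $(firewallResourceGroupName)
      -location $(location)
      -deployment "firewall-policy"
      -templateFile $(System.DefaultWorkingDirectory)"/platform/firewall-policy.bicep"
      -templateParameterFile $(System.DefaultWorkingDirectory)"/platform/firewall-policy-parameters.json"
    azurePowerShellVersion: 'LatestVersion'
```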
We wrap up the task by specifying the PowerShell version to use and the display name for the task in the DevOps job console.
The PowerShell Script
The full code for the script can be found here. I'm going to focus on a few little things:
The Parameters
You can see in the above example that I passed in several parameters:
- subscriptionId: The ID of the subscription to deploy the code to. This does not have to be the same as the default subscription specified in the Service Connection. The Service Principal used by the pipeline must have the required permissions in this subscription.
- resourceGroupName: The name of the resource group that the deployment will go into. My script will create the resource group if required.
- location: The Azure region of the resource group.
- deploymentName: The name of the ARM deployment that will be created in the resource group for the deployment (remember that Bicep deployments become ARM deployments).
- templateFile: The path to the template file in the pipeline container.
- templateParameterFile: The path to the parameter file for the template in the pipeline container.
Each of those parameters is identically named in param () at the start of the PowerShell script and those values specialise the execution of the generic script.
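A param () block matching those parameters might look like this at the top of deploy.ps1 (a sketch, not the author's full script; the Set-AzContext call is my assumption about how the subscriptionId parameter would be used):

```powershell
param (
    [Parameter(Mandatory = $true)] [string] $subscriptionId,
    [Parameter(Mandatory = $true)] [string] $resourceGroupName,
    [Parameter(Mandatory = $true)] [string] $location,
    [Parameter(Mandatory = $true)] [string] $deploymentName,
    [Parameter(Mandatory = $true)] [string] $templateFile,
    [Parameter(Mandatory = $true)] [string] $templateParameterFile
)

# Switch to the target subscription, which may differ from
# the Service Connection's default subscription
Set-AzContext -SubscriptionId $subscriptionId | Out-Null
```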
Outputs
You can use Write-Host to output a value from the script to appear in the console of the running job. If you add -ForegroundColor then you can make certain messages, such as errors or warnings, stand out.
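For example:

```powershell
Write-Host "Starting the hub deployment"
Write-Host "Warning: no parameter file supplied, using template defaults" -ForegroundColor Yellow
Write-Host "The deployment failed" -ForegroundColor Red
```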
Beware of Manual Inputs
Some PowerShell commands may prompt for manual input. This is not supported in a pipeline and will terminate the pipeline with an error. Test for this happening and use code logic wrapped around your cmdlets to prevent it – this is why a file-based script is better than a simple/short inline script, even for a situation as basic as creating a resource group.
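For example, Remove-AzResourceGroup prompts for confirmation by default; in a pipeline you would suppress the prompt and make the decision in code instead. A sketch of the pattern:

```powershell
# Only attempt the removal if the resource group actually exists,
# and use -Force to suppress the confirmation prompt that would
# otherwise hang and fail the pipeline
if (Get-AzResourceGroup -Name $resourceGroupName -ErrorAction SilentlyContinue)
{
    Remove-AzResourceGroup -Name $resourceGroupName -Force
}
```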
Try/Catch
Error handling is a big deal in a hands-off script. You will find that 90% of my script is checking for things and dealing with unwanted scenarios that can happen. A simple example is a resource group.
An ARM deployment (remember this includes Bicep) must go into a resource group. You could just write the one-liner to create a resource group. But what happens when you update the code and the script re-runs against a resource group that is already there? In that scenario, a manual input will appear (and fail the pipeline) to confirm that you want to continue. So I have an elaborate test-then-act process:
if (!(Get-AzResourceGroup $resourceGroupName -ErrorAction SilentlyContinue))
{
    try
    {
        # The resource group does not exist so create it
        Write-Host "Creating the $resourceGroupName resource group"
        # -ErrorAction Stop ensures a failure is caught by the catch block
        New-AzResourceGroup -Name $resourceGroupName -Location $location -ErrorAction Stop
    }
    catch
    {
        # There was an error creating the resource group
        Write-Host "There was an error creating the $resourceGroupName resource group" -ForegroundColor Red
        Break
    }
}
else
{
    # The resource group already exists so there is nothing to do
    Write-Host "The $resourceGroupName resource group already exists"
}
Conclusion
Once you know how to do it, executing a script in your pipeline is easy. Then your PowerShell knowledge can take over and your deployments can become more flexible and more powerful. My example executes ARM/Bicep deployments. Yours could do a PowerShell deployment, add scripted configurations to a template deployment, or even run another language like Terraform. The real thing to understand is that now you have a larger scripting toolset available to your automated deployments.
Source: https://aidanfinn.com/?p=22567