When possible, always run the pipeline on a Linux-based agent instead of a Windows-based one. In my experience this can reduce the runtime by up to 50%, depending on the pipeline workload:
pool:
vmImage: "ubuntu-latest"
(ubuntu-latest is also the default agent image in Azure DevOps, so if you don't specify anything else this will be used)
This alone is a reason to avoid the VSBuild@1 and VSTest@2 tasks, as they can only be run on a Windows-based agent. You should instead use the DotNetCoreCLI@2 task for building/restoring/testing .NET code.
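For reference, a build step using DotNetCoreCLI@2 could look something like this (a minimal sketch; the project path and configuration are placeholders):
- task: DotNetCoreCLI@2
  displayName: "dotnet build"
  inputs:
    command: "build"
    # Placeholder path; point this at your own project or solution
    projects: "**/MyProject.csproj"
    arguments: >
      --configuration Release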
A great way to reduce the total time a build takes is to run multiple smaller jobs in parallel instead of one big job. A job in Azure DevOps will automatically run on a separate agent, and thus run in parallel, as opposed to a separate step or task that will run sequentially on the same agent. In the following example, Job A & Job B will run at the same time:
jobs:
- job: A
steps:
- bash: echo "A"
- job: B
steps:
- bash: echo "B"
You can find more info about how Jobs work here.
This means you can, for example, have one job that builds the main source code, another job that runs the tests, and a third job that does some kind of analysis, and they can all run simultaneously.
You can combine this with a final "publish" job that uses the dependsOn parameter to make sure it doesn't run until all the other jobs have finished successfully.
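A minimal sketch of what that could look like (the job names and steps are placeholders):
jobs:
- job: Build
  steps:
  - bash: echo "building"
- job: Test
  steps:
  - bash: echo "testing"
- job: Publish
  # This job will not start until both Build and Test have finished successfully
  dependsOn:
  - Build
  - Test
  steps:
  - bash: echo "publishing"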
More or less the only thing that limits this kind of parallelization is dependencies between steps of a specific pipeline that cannot be moved to a different job (which runs on a different agent). Keep in mind, though, that you can use the PublishPipelineArtifact and DownloadPipelineArtifact tasks to publish some kind of result from one job and download it in another.
It is also easier to run more jobs in parallel if your .NET code is using .NET Core (.NET 6/7/8) and not .NET Framework, because it is possible to build individual .NET Core projects in the pipeline without having to build everything in a solution, which is not the case for .NET Framework*. This means that if you have, let's say, Project1.csproj and TestProject1.csproj that are both part of MySolution.sln, you can create two jobs that each build their specific .csproj file and not the entire solution, and then run them in parallel.
* Technically it is possible to build individual projects instead of an entire solution using VSBuild@1 too, by pointing the "solution" property to the path of a .csproj file. The issue is that this takes an extreme amount of time, often as long as building the entire solution in my experience. I suspect this is because the legacy "VS" tasks are built from the ground up to work with a solution file, so building this way is a bit of a hack and does not seem to be officially supported.
Keep in mind that as you increase the number of parallel jobs that are being run you might start getting into issues where there are no available agents because all of them are already busy with other jobs. In this case you can go to Project Settings > Pipelines > Parallel jobs and increase the number there (if you're willing to pay for it). This costs $40 per month per additional Microsoft-hosted agent.
If you are using some kind of tool for static code analysis, such as SonarQube, keep in mind that on a medium to large solution this adds a significant amount of time to the build process, on top of the time it takes to run the actual analysis (at least when it comes to SonarQube). Therefore a good way to save time is to skip this analysis when it is not "required" (based on preferences and/or organizational policies).
One way to achieve this is to create a script like this (this is a PowerShell example):
# SonarQube analysis will be run if any of these are true:
# 1. The runSonarQube parameter is manually set to true.
# 2. The build is for either of these branches: dev, master, Releases/*.
# 3. The build is for a pull request to either of these branches: dev, master, Releases/*.
Param(
[string]$runSonarQubeParameter,
[string]$buildSourceBranch,
[string]$pullRequestTargetBranch
)
$branchRequiresAnalysis = $buildSourceBranch -eq 'dev' -or $buildSourceBranch -eq 'master' -or $buildSourceBranch -like 'Releases/*'
$prTargetRequiresAnalysis = $pullRequestTargetBranch -eq 'dev' -or $pullRequestTargetBranch -eq 'master' -or $pullRequestTargetBranch -like 'Releases/*'
if ($branchRequiresAnalysis) {
Write-Host "Branch requires SonarQube analysis."
}
if ($prTargetRequiresAnalysis) {
Write-Host "Pull request target requires SonarQube analysis."
}
if ($runSonarQubeParameter -eq "True") {
Write-Host "SonarQube analysis is manually requested."
}
$sonarQubeShouldBeRun = $runSonarQubeParameter -eq "True" -or $branchRequiresAnalysis -or $prTargetRequiresAnalysis
if (!$sonarQubeShouldBeRun) {
Write-Host "##[warning] NOTE: SonarQube analysis will be skipped for this build!"
}
# Set "SonarQubeShouldBeRun" variable to be used in rest of the pipeline
# See: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/set-variables-scripts?view=azure-devops&tabs=powershell
Write-Host "##vso[task.setvariable variable=sonarQubeShouldBeRun]$sonarQubeShouldBeRun"
This script can then be called like this:
# This script will set the variable "sonarQubeShouldBeRun" to true or false
- task: PowerShell@2
displayName: Determine if SonarQube analysis should be run
inputs:
targetType: filePath
filePath: build/scripts/Determine-SonarQubeAnalysisShouldBeRun.ps1
arguments: "${{ parameters.runSonarQube }} '$(Build.SourceBranchName)' '$(System.PullRequest.TargetBranchName)'"
and the sonarQubeShouldBeRun variable can be used to control the analysis steps like this:
- task: SonarQubeAnalyze@5
displayName: "SonarQube: Run analysis"
condition: and(succeeded(), eq(variables.sonarQubeShouldBeRun, true))
This will make sure that the analysis is only run for the main branches that require it, as well as for any pull requests that target those branches. On top of this, it also takes into account the manual runSonarQube parameter, which can be used when an analysis run is required outside of these situations.
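Note that the same condition can be applied to the other SonarQube-related tasks (such as the prepare step) so that they are also skipped when no analysis is needed. A sketch, reusing the prepare task shown later in this article:
- task: SonarQubePrepare@5
  displayName: "SonarQube: Prepare"
  condition: and(succeeded(), eq(variables.sonarQubeShouldBeRun, true))
  inputs:
    SonarQube: "SonarQube"
    scannerMode: "MSBuild"
    projectKey: "PROJECT_KEY"
    projectName: "PROJECT_NAME"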
This parameter can be set like this:
parameters:
- name: runSonarQube
type: boolean
default: false
displayName: Run SonarQube analysis
and will show up like this in the UI when queueing a new pipeline build:
Make sure you are not accidentally building a project/solution multiple times - because of the way the implicit restore and build work for the dotnet tasks, it is very easy to, say, first build a solution containing a project and a test project in one step, and then run the tests using the DotNetCoreCLI@2 task, not knowing that this will trigger an additional, unnecessary build of that test project.
A way to get around this is to either (a) skip the separate build step and simply run the test task, as this will also restore and build the project, or (b) keep the separate build task and then call the test task with the --no-build argument:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyTestProject.csproj"
arguments: >
--no-build # <----
The --no-build
flag will skip building the test project before running it, it also implicitly sets the --no-restore flag.
The PublishCodeCoverageResults@1 task in Azure DevOps is used to take already produced code coverage results ("JaCoCo" / "Cobertura" format) and publish them to the pipeline. This makes the code coverage results show up as a tab in the pipeline run summary in Azure DevOps:
The issue is that this task is so incredibly slow that it is basically unusable unless the number of files is very small. This is a known issue that was reported years ago but has not been fixed (yet).
An alternative to this stand-alone task, if you are running a .NET test task anyway, is to specify that code coverage should be collected and published during the test run, like this:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyTestProject.csproj"
publishTestResults: true # <----
arguments: >
--collect "Code Coverage" # <----
Note that the default value for the "publishTestResults" parameter is true and it can therefore be skipped. I've explicitly added it here for the sake of clarity.
Publishing the test results directly from the "DotNetCoreCLI@2" task like this is much, much faster, and I don't exactly know why. However, the "built-in" code coverage reporting only handles the binary .coverage format (which is what is produced if you don't specify another format in the --collect argument). Therefore, if you are instead producing code coverage results in some XML-based format (using "Coverlet", for example), then you need to use the stand-alone publish task.
An alternative is to produce the code coverage results in the .coverage format, publish them, and then re-format the results to XML in a separate task. One way to do this is to use the dotnet-coverage tool. Specific info about re-formatting and/or merging reports using this tool can be found here.
Combining both these things could look like this:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyTestProject.csproj"
publishTestResults: true
arguments: >
--collect "Code Coverage"
- task: PowerShell@2
displayName: "Install the 'dotnet-coverage' tool"
inputs:
targetType: inline
script: dotnet tool install dotnet-coverage --global --ignore-failed-sources
- script: >
dotnet-coverage merge -o $(Agent.TempDirectory)/coverage.xml -f xml $(Agent.TempDirectory)/*/*.coverage
displayName: "Re-format code coverage file(s) to XML"
This lets you take advantage of the faster publishing speed of the DotNetCoreCLI@2 task while still being able to output the code coverage results in a more generic format (for SonarQube, for example).
DotNetCoreCLI@2:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyTestProject.csproj"
arguments: >
--collect "Code Coverage" # <----
Note that you can specify the argument as --collect "Code Coverage;Format=Xml" to collect the coverage information in an XML format instead of the binary .coverage format.
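Applied to the task above, that could look like this:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
  inputs:
    command: "test"
    projects: "**/MyTestProject.csproj"
    arguments: >
      --collect "Code Coverage;Format=Xml" # <----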
VSTest@2:
- task: VSTest@2
  displayName: "🔬 VS Test"
inputs:
testAssemblyVer2: |
Tests/**/MyTestProject.dll
!**/obj/**
platform: "AnyCPU"
configuration: "Release"
codeCoverageEnabled: true # <----
Even though it requires some extra work, it is possible to collect code coverage from multiple parallel jobs, which allows you to significantly improve performance for large solutions with long build times and many tests (see performance-related tips).
- Run all tests in multiple jobs. In each job you need to check out the code, build the relevant code, and then run the tests, making sure to specify that code coverage should be collected. You then need to publish the test results (.coverage and .trx files) so they can be downloaded in the job that is going to run the actual SonarQube analysis:
############################
#### RUN TESTS
############################
# The `--no-build` flag will skip building the test project before running it (since we already built in the previous step)
# It also implicitly sets the --no-restore flag
# https://learn.microsoft.com/en-us/dotnet/core/tools/dotnet-test
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "${{ parameters.testsToRun }}"
testRunTitle: "${{ parameters.testSuite }}"
publishTestResults: true
arguments: >
--configuration Release
--collect "Code Coverage"
--no-build
############################
#### PUBLISH ARTIFACT
############################
# Copy relevant files to a "TestResults" folder
# so we can publish them as an artifact without including the entire TempDirectory
# NOTE: We only look for .coverage files one sub-directory down, because there exists
# duplicates of these files further down, which we do not want to copy.
- task: CopyFiles@2
displayName: "Copy test result files to $(Agent.TempDirectory)/TestResults"
inputs:
SourceFolder: "$(Agent.TempDirectory)"
Contents: |
**/*.trx
*/*.coverage
TargetFolder: "$(Agent.TempDirectory)/TestResults"
flattenFolders: true
- task: PublishPipelineArtifact@1
displayName: "Publish pipeline artifact: ${{ parameters.jobName }}"
inputs:
targetPath: "$(Agent.TempDirectory)/TestResults"
artifactName: ${{ parameters.jobName }}
- You will also need to have one job that prepares the analysis, builds the source code and runs the analysis. It is not possible to split the actual analysis and the preparation/building (more info here).
- In the preparation step you will need to specify the paths where SonarQube can find the test result files (.trx) and the code coverage files (which will be converted from .coverage to .xml):
- task: SonarQubePrepare@5
displayName: "SonarQube: Prepare"
inputs:
SonarQube: "SonarQube"
scannerMode: "MSBuild"
projectKey: "PROJECT_KEY"
projectName: "PROJECT_NAME"
extraProperties: |
sonar.cs.vscoveragexml.reportsPaths=$(Agent.TempDirectory)/TestResults/merged.coverage.xml
sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/TestResults/*/*.trx
- After we build the source code, but before performing the SonarQube analysis, we need to download all the test run result artifacts that were published using the PublishPipelineArtifact@1 task:
- task: DownloadPipelineArtifact@2
displayName: "Download test run artifacts"
inputs:
targetPath: $(Agent.TempDirectory)/TestResults
Note that if the tests take longer to run than building the project, the pipeline will attempt to download the test run artifacts before they are available, which will make the results in SonarQube incorrect. You can potentially solve this with a delay (or in a more sophisticated way using a script and the Azure CLI):
- task: PowerShell@2
  displayName: "⏳ Delay for 2 minutes to wait for test result artifacts to be available"
inputs:
targetType: inline
script: "Start-Sleep -Seconds 120"
- task: DownloadPipelineArtifact@2
displayName: "Download test run artifacts"
inputs:
targetPath: $(Agent.TempDirectory)/TestResults
- We then convert the code coverage files to XML and merge them into one big file using the dotnet-coverage tool (note that we output the final result to the path that we specified in the SonarQube preparation task):
- task: PowerShell@2
displayName: "Install the 'dotnet-coverage' tool"
inputs:
targetType: inline
script: dotnet tool install dotnet-coverage --global --ignore-failed-sources
- script: >
dotnet-coverage merge -o $(Agent.TempDirectory)/TestResults/merged.coverage.xml -f xml -r $(Agent.TempDirectory)/TestResults/*.coverage --remove-input-files
displayName: "Merge and re-format code coverage files to XML"
- The final step is to fix the path information in the code coverage files. This is needed because, when the files are generated, they retain the paths to the source files being tested:
Because these files were generated on different agents, the part of the path that refers to the agent (C:\agent2\_work) will not match the paths of the source files on the agent we downloaded the results to. We have to fix this manually, otherwise SonarQube won't recognize the coverage information as valid. We can do that like this:
- task: PowerShell@2
displayName: "Fix code coverage file paths in merged.coverage.xml"
inputs:
targetType: filePath
filePath: build/scripts/Fix-CodeCoverageFilePaths.ps1
arguments: -pathToCoverageFile "$(Agent.TempDirectory)/TestResults/merged.coverage.xml"
condition: and(succeeded(), eq(variables.sonarQubeShouldBeRun, true))
The script this task uses looks like this:
# When generating code coverage reports, the paths to the source files are stored in the code coverage file.
# Because we run multiple test jobs on different agents in parallel, it leads to the code coverage file paths being different on each agent.
# This script fixes the paths in the code coverage file so that they are correct from the point-of-view of the agent running the code coverage analysis.
Param(
[string]$pathToCoverageFile
)
Write-Host "Fixing file paths in code coverage file '$pathToCoverageFile'..."
$localPath = (Get-Location).Path
$linuxFilePattern = '/home/vsts/work/\d+/s'
$onPremWindowsFilePattern = 'C:\\agent\\_work\\\d+\\s'
(Get-Content -path $pathToCoverageFile -Raw) -replace $linuxFilePattern, $localPath -replace $onPremWindowsFilePattern, $localPath | Set-Content $pathToCoverageFile
- After this we can finally run the analysis step:
- task: SonarQubeAnalyze@5
displayName: "SonarQube: Run analysis"
We should then get both code coverage information and test results into SonarQube:
There is support for showing code coverage information for Pull Requests in Azure DevOps, if you have it enabled it shows up like this:
To enable this you need to:
- Add build validation for the target branch, so that a new build is run and checked when a new PR is opened.
- In your build pipeline, enable gathering of code coverage* from your test runs and publish the results; you will then "automatically" get code coverage information in the PR as shown above:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
  inputs:
    command: "test"
    projects: "**/MyTestProject.csproj"
    publishTestResults: true # <----
    arguments: >
      --collect "Code Coverage" # <----
* Note that only the binary .coverage format is currently supported, so you need to make sure you are publishing this format.
If you run the dotnet tool install command in a task on a Linux-based agent, in a repository that contains multiple project files, you might run into issues where the task fails with an error message along the lines of the folder containing multiple project files. I think this is related to the fact that the .NET Core CLI will automatically restore any .NET projects in the working directory and does not like it if there are multiple of them. Installing a new dotnet tool does not require this to happen, so it is ostensibly a bug, but I might be missing something.
One way to get around this issue is to set the "workingDirectory" parameter to an arbitrary folder in the repository that does not contain any .csproj files at all:
- task: PowerShell@2
displayName: "Install the 'dotnet-coverage' tool"
inputs:
targetType: inline
workingDirectory: "$(Build.SourcesDirectory)/ArbitraryFolder" # <----
script: dotnet tool install dotnet-coverage --global --ignore-failed-sources
If you need to publish and download artifacts between different jobs in a pipeline, you can use the PublishPipelineArtifact and DownloadPipelineArtifact tasks in Azure DevOps. One thing to keep in mind when doing this is that the DownloadPipelineArtifact task doesn't flatten folders: if you download artifacts "MyCoolArtifact1" and "MyCoolArtifact2" with some arbitrary files into "MyFolder", the files will end up in MyFolder/MyCoolArtifact1 and MyFolder/MyCoolArtifact2 instead of directly in MyFolder/....
One way to solve this is to first download the pipeline artifacts and then use the "CopyFiles@2" task with the flattenFolders parameter set to true:
- task: CopyFiles@2
displayName: "Copy test result files to $(Agent.TempDirectory)/TestResults"
inputs:
SourceFolder: "MyFolder"
Contents: "**"
TargetFolder: "MyFolder"
flattenFolders: true # <----
Contents: "**" copies all files in the specified source folder and all files in all sub-folders. Note that this is the default value; I've explicitly added it here for the sake of clarity.
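For completeness, the download step that precedes the copy could look something like this (a sketch; when no artifact name is specified, all artifacts of the current run are downloaded, each into its own sub-folder of the target path):
- task: DownloadPipelineArtifact@2
  displayName: "Download all pipeline artifacts"
  inputs:
    # Each artifact ends up in MyFolder/<ArtifactName>,
    # which is why the CopyFiles@2 step is used to flatten the structure afterwards
    targetPath: "MyFolder"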
If you want to use the "CopyFiles@2" task to copy specific file types (like .xml, .coverage, .trx) instead of all files in a specific folder, you need to make sure that you do not write the patterns over multiple lines using single quotes, like this:
- task: CopyFiles@2
inputs:
SourceFolder: "$(Build.SourcesDirectory)"
Contents: |
'**\bin\**\*.dacpac'
'**\PublishProfile\*.publish.xml'
TargetFolder: "$(Build.ArtifactStagingDirectory)"
You instead need to write it like this (without quotes), otherwise the files won't be found:
- task: CopyFiles@2
inputs:
SourceFolder: "$(Build.SourcesDirectory)"
Contents: |
**\bin\**\*.dacpac
**\PublishProfile\*.publish.xml
TargetFolder: "$(Build.ArtifactStagingDirectory)"
More information about this bug can be found in this forum post.
If you are running your pipeline on a self-hosted agent and have tasks that install new software, for example using dotnet tool install, then a restart of the agent could be required for it to recognize the new tool/software.
I noticed this when I tried installing the dotnet-coverage tool: it said it was already installed, but at the same time, when trying to use it in a task, it said it wasn't installed, leading to a catch-22. Restarting the agent solved this issue.
The solution was taken from this forum post.
Note that the "DotNetCoreCLI@2" task puts test results in $(Agent.TempDirectory), whereas the legacy "VSTest@2" task puts them in $(Agent.TempDirectory)/TestResults.
This location can be re-configured for "VSTest@2" using the resultsFolder parameter.
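For example, if you want "VSTest@2" to put its results in the same location that "DotNetCoreCLI@2" uses, something like this should work (a sketch based on the VSTest example earlier):
- task: VSTest@2
  displayName: "🔬 VS Test"
  inputs:
    testAssemblyVer2: |
      Tests/**/MyTestProject.dll
      !**/obj/**
    platform: "AnyCPU"
    configuration: "Release"
    resultsFolder: "$(Agent.TempDirectory)" # <----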
There are two main SonarQube-related tasks available in Azure DevOps: SonarQubePrepare@5 and SonarQubeAnalyze@5.
From what I've read, seen, and tried, these two tasks HAVE to be run in the same job, otherwise the analysis step fails. I don't know specifically what the prepare step does and there isn't a lot of documentation about it either, but there is some kind of magic that happens behind the scenes. All I know is that part of what happens is that it creates a hidden .sonarqube folder in the working directory with a bunch of files. I even tried copying the entire working folder to another agent after the prepare step and then running the analysis, and that still didn't work...
What this means is that you cannot optimize the pipeline by, for example, running one job that prepares the analysis and builds the source code in parallel with the test jobs, and then ending with a job that runs the SonarQube analysis. Instead you need to either run everything in one big job, or have one job that prepares the analysis, builds the source code, and then waits to download the test results from the separate test jobs before running the analysis (which leads to the timing issues mentioned earlier).
Either way, it is annoying...
Getting test execution results into SonarQube is configured through the "Test execution parameters" in SonarQube and specified in the "SonarQubePrepare@5" task.
For C# it could look like this:
- task: SonarQubePrepare@5
displayName: "SonarQube: Prepare"
inputs:
SonarQube: "SonarQube"
scannerMode: "MSBuild"
projectKey: "${{ parameters.projectName }}"
projectName: "${{ parameters.projectName }}"
extraProperties: |
sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/*.trx # <----
If you are using XUnit or NUnit instead of VSTest/MSTest there are alternative report paths for these.
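For example, the prepare step could then look something like this (a sketch; the sonar.cs.nunit.reportsPaths / sonar.cs.xunit.reportsPaths properties take the place of the VSTest one, and the report file paths are placeholders that depend on where your test runner writes its result files):
- task: SonarQubePrepare@5
  displayName: "SonarQube: Prepare"
  inputs:
    SonarQube: "SonarQube"
    scannerMode: "MSBuild"
    projectKey: "PROJECT_KEY"
    projectName: "PROJECT_NAME"
    extraProperties: |
      sonar.cs.nunit.reportsPaths=$(Agent.TempDirectory)/**/NUnitResults.xml
      sonar.cs.xunit.reportsPaths=$(Agent.TempDirectory)/**/XUnitResults.xml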
These test result reports are what make this information show up in SonarQube:
Keep in mind that SonarQube only supports certain code coverage formats for certain languages.
For example: the "Cobertura" code coverage format is not supported for C#, but it is supported for Flex and Python. This can become confusing because "Cobertura" shows up as a popular code coverage format in a lot of C# articles etc. So even though it is fully possible to generate this format "out-of-the-box" for C# code, SonarQube won't see it as valid.
Also, the binary .coverage format that is generated by default when collecting code coverage info in .NET is not supported by SonarQube, but at the same time it is the expected format when publishing test results to the Azure DevOps pipeline. Therefore it is recommended to collect this data in the binary format and then re-format it into an XML format that is compatible with SonarQube before running the analysis step.
If you are analyzing .NET code using SonarQube and are using a Windows-based agent, then there seems to be some convention-based magic happening behind the scenes that is good to know about.
This is what is written in SonarQube's documentation:
"[...] when you are using an Azure DevOps Windows image for your build. In these cases, the .NET Framework scanner will automatically find the coverage output generated by the --collect "Code Coverage" parameter without the need for an explicit report path setting. It will also automatically convert the generated report to XML. No further configuration is required."
So the paths to the test results are implicitly set (it relies on them being in $(Agent.TempDirectory)/TestResults and after that checks a few other "reasonable" places) AND the binary .coverage format is automatically converted to XML. In my opinion this involves way too much "magic" and is just needlessly confusing if you are not using this exact setup...
Either way, if you are not running on a Windows image (which you should avoid for performance reasons) then you need to do this yourself instead.
Converting .coverage to XML (this was also referenced in an earlier chapter):
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyTestProject.csproj"
publishTestResults: true
arguments: >
--collect "Code Coverage"
- task: PowerShell@2
displayName: "Install the 'dotnet-coverage' tool"
inputs:
targetType: inline
script: dotnet tool install dotnet-coverage --global --ignore-failed-sources
- script: >
dotnet-coverage merge -o $(Agent.TempDirectory)/coverage.xml -f xml $(Agent.TempDirectory)/*/*.coverage
displayName: "Re-format code coverage file(s) to XML"
Specifying test result paths:
- task: SonarQubePrepare@5
displayName: "SonarQube: Prepare"
inputs:
SonarQube: "SonarQube"
scannerMode: "MSBuild"
projectKey: "PROJECT_KEY"
projectName: "PROJECT_NAME"
extraProperties: |
sonar.cs.vscoveragexml.reportsPaths=$(Agent.TempDirectory)/TestResults/.coverage.xml # <----
sonar.cs.vstest.reportsPaths=$(Agent.TempDirectory)/TestResults/*/*.trx # <----
You can utilize templates in Azure DevOps to define reusable content, logic, and parameters in YAML pipelines.
The way this works is that you can first define some kind of YAML code in one repository, say TemplateRepository in the Infrastructure project in Azure DevOps:
TemplateRepository/templates/mytemplate.yml:
parameters:
- name: message
type: string
steps:
- bash: echo ${{ parameters.message }}
You can then use that template like this (you specify the template repository as a resource and then you point to the file you want to use):
resources:
repositories:
- repository: infrastructure # variable name
type: git
name: Infrastructure/TemplateRepository # Project/Repo
ref: refs/heads/main # branch
trigger: none
pool:
vmImage: ubuntu-latest
steps:
- template: templates/mytemplate.yml@infrastructure
parameters:
message: "My cool message"
There is no support for the condition keyword when using templates, meaning you cannot write something like this:
- template: templates/mytemplate.yml@infrastructure
condition: and(succeeded(), ne(variables['Build.Reason'], 'PullRequest'))
What you CAN do though is define the condition like this:
- ${{ if ne(variables['Build.Reason'], 'PullRequest') }}:
- template: templates/jobs/publish-viedoc-package.yml
and that will work.
If you are using pipeline templates and want to, for example, use script files from the repository that the template YAML file is checked into, you can accomplish this by checking out multiple repositories in the pipeline. That means checking out both the repository that is using the template repository as a resource and the template repository itself:
TemplateRepository/templates/mytemplate.yml:
steps:
# checkout: self is implicitly defined for all pipelines; I've added it here for clarity
- checkout: self
# checkout the infrastructure repo so we can run script files from it
- checkout: infrastructure
One important thing to keep in mind when doing this is that it changes the way the default working directory works. Normally, when you check out a single repository, the root of that repository is the working directory, so if you have a repository MyRepo which has a folder MyFolder with a text file Test.txt, then in that pipeline you can use the path MyFolder/Test.txt to find that file.
But if you are checking out more than one repo, the working directory will be one folder "up", with the root folder of each repository placed inside that folder. For example, you check out MyRepo and MyRepo2. The path to Test.txt in MyRepo will now be MyRepo/MyFolder/Test.txt instead of just MyFolder/Test.txt like it was before:
- MyRepo
  - MyFolder
    - Test.txt
- MyRepo2
  - MyFolder2
    - Test2.txt
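Once both repositories are checked out, a script from the template repository can then be referenced by prefixing the path with the repository's folder name. A sketch (the script path is hypothetical):
steps:
- checkout: self
- checkout: infrastructure
- task: PowerShell@2
  displayName: "Run a script from the template repository"
  inputs:
    targetType: filePath
    # The checked-out repository typically ends up in a folder named after the
    # repository itself (TemplateRepository), not after the resource alias
    filePath: "$(Build.SourcesDirectory)/TemplateRepository/scripts/My-Script.ps1"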
If you are running tests that require a local "Azurite" instance, for example for emulating Azure Storage, then you need a way to duplicate this functionality when running these tests in your CI pipeline.
One way to do that is to add this task:
# Azurite is required for some tests to run as expected
# See: https://learn.microsoft.com/en-us/samples/azure-samples/automated-testing-with-azurite/automated-testing-with-azure/
- bash: |
npm install -g azurite
mkdir azurite
azurite --silent --location azurite &
displayName: "Install and Run Azurite"
You can customize the value of the testRunTitle parameter for both the DotNetCoreCLI@2 task and the VSTest@2 task.
For example:
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyTestProject.csproj"
testRunTitle: "Application (Clinic)"
This will make the results that show up in the test results tab in Azure DevOps more readable and clear:
If you have specified a local NuGet feed in a nuget.config file in the root of your repository, like this:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<packageSources>
<clear />
<add key="Local" value="%USERPROFILE%\.viedoc\local\packages\nuget" /> # <----
<add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
</packageSources>
</configuration>
you will run into issues whenever you try to build any project in this repository in a pipeline. This is because of the implicit restore that triggers whenever you build a .csproj; this restore step will default to using the feed information provided by the nuget.config file.
What this leads to is that whenever you try to trigger a task that builds or runs tests using your .NET projects, e.g.:
- task: DotNetCoreCLI@2
  displayName: "🛠 dotnet build"
inputs:
command: "build"
projects: "**/ProjectToBuild.csproj"
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/ProjectToTest.csproj"
then the implicit restore will kick in and try to find the local feed specified in the nuget.config file, which it is unable to do, so the task fails.
One way to solve this is to separate the restore, build and test steps. This allows you to specify which feed should be used in the restore step, and then specify that no restore should be performed in the subsequent steps.
We specify the organization feed to use with the vstsFeed parameter:
- task: DotNetCoreCLI@2
  displayName: "♻ dotnet restore"
inputs:
command: "restore"
projects: "**/MyProject.csproj"
vstsFeed: "ProjectName/FeedName" # <----
We then use the --no-restore argument to skip the implicit restore:
- task: DotNetCoreCLI@2
  displayName: "🛠 dotnet build"
inputs:
command: "build"
projects: "**/MyProject.csproj"
arguments: >
--no-restore # <----
We also use the --no-build argument when running the tests:
# The `--no-build` flag will skip building the test project before running it (since we already built in the previous step)
# It also implicitly sets the --no-restore flag
- task: DotNetCoreCLI@2
  displayName: "🔬 dotnet test"
inputs:
command: "test"
projects: "**/MyProject.csproj"
arguments: >
--no-build # <----
If you are running a pipeline that builds a solution with a mix of .NET Core and .NET Framework projects, you can run into issues if you run the VSTest task after that. This seems to be because the task gets "confused" about which test adapter to use during the run. A way to solve this is to use the pathtoCustomTestAdapters property and point it to one of the .NET Framework projects in the solution (it doesn't matter which one):
- task: VSTest@2
  displayName: "🔬 VS Test"
inputs:
testAssemblyVer2: |
Tests/**/MyTestProject.dll
!**/obj/**
platform: "AnyCPU"
configuration: "Release"
    pathtoCustomTestAdapters: "Tests/MyTestProject/bin/Release/net472/" # <----