- .NET 8 SDK (see global.json for the currently required version)
- Install Podman
- Install dependencies:

  ```bash
  brew tap cfergeau/crc
  # https://github.com/containers/podman/issues/21064
  brew install vfkit
  brew install docker-compose
  ```

- Restart your Mac
- Finish setup in Podman Desktop
- Check that Docker Compatibility mode is enabled (see the bottom left corner)
- Enable privileged mode for testcontainers-dotnet:

  ```bash
  echo "ryuk.container.privileged = true" >> $HOME/.testcontainers.properties
  ```
- Git
- .NET 8 SDK
- WSL2 (to install, open a PowerShell admin window and run `wsl --install`)
- Virtual Machine Platform (installs with WSL2, see the link above)
- Install Podman Desktop.
- Start Podman Desktop and follow the instructions to install Podman.
- Follow the instructions in Podman Desktop to create and start a Podman machine.
- In Podman Desktop, go to Settings → Resources and run setup for the Compose extension. This will install docker-compose.
You can run the entire project locally using `podman compose`. (This uses docker-compose behind the scenes.)

```bash
podman compose up
```
The following GUI services should now be available:
- WebAPI/SwaggerUI: localhost:7124/swagger
- GraphQl/BananaCakePop: localhost:7215/graphql
- Redis/Insight: localhost:7216
The WebAPI and GraphQl services are behind an nginx proxy, and you can change the number of replicas by setting the `scale` property in the `docker-compose.yml` file.
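Once the stack is up, a quick smoke test against the endpoints listed above (a sketch using the default ports from the list; adjust if you changed them):

```bash
# Both commands should print a success message if the containers are healthy
curl -fsS http://localhost:7124/swagger > /dev/null && echo "WebAPI swagger is up"
curl -fsS http://localhost:7216 > /dev/null && echo "Redis Insight is up"
```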
If you need to debug the WebApi/GraphQl projects in an IDE, you can alternatively run `podman compose` without the WebAPI/GraphQl services.

First, create a dotnet user secret for the DB connection string:

```bash
dotnet user-secrets set -p .\src\Digdir.Domain.Dialogporten.WebApi\ "Infrastructure:DialogDbConnectionString" "Server=localhost;Port=5432;Database=Dialogporten;User ID=postgres;Password=supersecret;Include Error Detail=True;"
```

Then run `podman compose` without the WebAPI/GraphQl projects:

```bash
podman compose -f docker-compose-no-webapi.yml up
```
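If you want to run the WebAPI from the command line instead of an IDE while the rest of the stack runs in compose, something like the following should work (a sketch; the project path matches the user-secrets command above):

```bash
# Starts the WebAPI against the services started by docker-compose-no-webapi.yml
dotnet run --project .\src\Digdir.Domain.Dialogporten.WebApi\
```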
This project uses Entity Framework Core to manage DB migrations. DB development can be done either through Visual Studio's Package Manager Console (PMC) or through the CLI.

Set `Digdir.Domain.Dialogporten.Infrastructure` as the startup project in Visual Studio's solution explorer, and as the default project in PMC. You are now ready to use EF Core tools through PMC. Run the following command for more information:

```powershell
Get-Help about_EntityFrameworkCore
```
Install the CLI tool with the following command:

```bash
dotnet tool install --global dotnet-ef
```

You are now ready to use EF Core tools through the CLI. Run the following command for more information:

```bash
dotnet ef --help
```

Remember to target the `Digdir.Domain.Dialogporten.Infrastructure` project when running the CLI commands. Either target it through the command using the `-p` option, i.e.

```bash
dotnet ef migrations add -p .\src\Digdir.Domain.Dialogporten.Infrastructure\ TestMigration
```

Or change your directory to the infrastructure project and then run the command:

```bash
cd .\src\Digdir.Domain.Dialogporten.Infrastructure\
dotnet ef migrations add TestMigration
```
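As an example, applying pending migrations to the local database could look like this (a sketch, assuming the database from the compose setup above is running; depending on the configuration you may also need to point at a startup project with `-s`):

```bash
# Applies all pending migrations to the database configured for the Infrastructure project
dotnet ef database update -p .\src\Digdir.Domain.Dialogporten.Infrastructure\
```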
Besides ordinary unit and integration tests, there are test suites for both functional and non-functional end-to-end tests implemented with K6. See `tests/k6/README.md` for more information.
When RenovateBot updates `global.json` or base image versions in Dockerfiles, make sure they match. The `global.json` file should always specify the same SDK version as the base image in the Dockerfiles. This ensures that the SDK version used in the local development environment matches the SDK version used in the CI/CD pipeline, since `global.json` is used when building the solution in CI/CD.
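A quick way to sanity-check that the versions match (a sketch; the Dockerfile locations are an assumption, so adjust the search to wherever the Dockerfiles actually live):

```bash
# SDK version pinned for local development and CI/CD
grep '"version"' global.json

# SDK base images referenced by the Dockerfiles
grep -rn "dotnet/sdk" --include="Dockerfile*" .
```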
To generate test tokens, see https://github.com/Altinn/AltinnTestTools. There is a request in the Postman collection for this.
We are able to toggle some external resources in local development. This is done through the `appsettings.Development.json` file. The following settings are available:

```json
"LocalDevelopment": {
  "UseLocalDevelopmentUser": true,
  "UseLocalDevelopmentResourceRegister": true,
  "UseLocalDevelopmentOrganizationRegister": true,
  "UseLocalDevelopmentNameRegister": true,
  "UseLocalDevelopmentAltinnAuthorization": true,
  "UseLocalDevelopmentCloudEventBus": true,
  "UseLocalDevelopmentCompactJwsGenerator": true,
  "DisableCache": true,
  "DisableAuth": true,
  "UseInMemoryServiceBusTransport": true
}
```
Toggling these flags will enable/disable the external resources. The `DisableAuth` flag, for example, will disable authentication in the WebAPI project. This is useful when debugging the WebAPI project in an IDE. These settings will only be respected in the `Development` environment.
During local development, it is natural to tweak configurations. Some of these configurations are meant to be shared through git, such as the endpoint for a new integration that may be used during local development. Other configurations are only meant for a specific debug session or a developer's personal preferences, which should not be shared through git, such as lowering the log level below warning.
The configuration in the `appsettings.local.json` file takes precedence over all other configurations and is only loaded in the Development environment. Additionally, it is ignored by git through the `.gitignore` file.

If developers need to add configuration that should be shared, they should use `appsettings.Development.json`. If the configuration is not meant to be shared, they can create an `appsettings.local.json` file to override the desired settings.
Here is an example of enabling debug logging only locally:
```jsonc
// appsettings.local.json
{
  "Serilog": {
    "WriteTo": [
      {
        "Name": "Console",
        "Args": {
          "outputTemplate": "[{Timestamp:HH:mm:ss.fff} {Level:u3}] {Message:lj}{NewLine}{Exception}"
        }
      }
    ],
    "MinimumLevel": {
      "Default": "Debug"
    }
  }
}
```
Add the following to the `Program.cs` file to load the `appsettings.local.json` file:

```csharp
var builder = WebApplication.CreateBuilder(args);
// or var builder = CoconaApp.CreateBuilder(args);
// or var builder = Host.CreateApplicationBuilder(args);
// or some other builder implementing IHostApplicationBuilder

// Left out for brevity

builder.Configuration
    // Add local configuration as the last configuration source to override other configurations
    //.AddSomeOtherConfiguration()
    .AddLocalConfiguration(builder.Environment);

// Left out for brevity
```
For pull requests, the title must follow Conventional Commits. The title of the PR will be used as the commit message when squashing/merging the pull request, and the body of the PR will be used as the description.

This title will be used to generate the changelog (using Release Please). Using `fix` will add the entry to "Bug Fixes", and `feat` will add it to "Features". All the others, such as `chore`, `ci`, etc., will be ignored. (Example release)
This repository contains code for both infrastructure and applications. Configurations for infrastructure are located in `.azure/infrastructure`. Application configuration is in `.azure/applications`.
Deployments are done using GitHub Actions with the following steps:

- Action: Create a pull request.
- Merge: Once the pull request is reviewed and approved, merge it into the `main` branch.

- Trigger: Merging the pull request into `main`.
- Action: The code is built and deployed to the test environment.
- Tag: The deployment is tagged with `<version>-<git-sha>`.

- Passive: Release-please creates or updates a release pull request.
- Purpose: This generates a changelog and bumps the version number.
- Merge: Merge the release pull request into the `main` branch.

- Trigger: Merging the release pull request.
- Action:
  - Bumps the version number.
  - Generates the release and changelog.
  - The deployment is tagged with the new `<version>` without `<git-sha>`.
  - The new version is built and deployed to the staging environment.

- Action: Perform a dry run towards the production environment to ensure the deployment can proceed without issues.

- Trigger: Approval of the dry run.
- Action: The new version is built and deployed to the production environment.
Release Please is used to create releases, generate the changelog, and bump version numbers. `CHANGELOG.md` and `version.txt` are automatically updated and should not be changed manually.
This project uses two GitHub dispatch workflows to manage manual deployments: `dispatch-apps.yml` and `dispatch-infrastructure.yml`. These workflows allow for manual triggers of deployments through GitHub Actions, providing flexibility for deploying specific versions to designated environments.
The `dispatch-apps.yml` workflow is responsible for deploying applications. To trigger this workflow:

- Navigate to the Actions tab in the GitHub repository.
- Select the `Dispatch Apps` workflow.
- Click on "Run workflow".
- Fill in the required inputs:
  - environment: Choose the target environment (`test`, `staging`, or `prod`).
  - version: Specify the version to deploy. This can be a git tag or a docker tag published in packages.
  - runMigration (optional): Indicate whether to run database migrations (`true` or `false`).

This workflow will handle the deployment of applications based on the specified parameters, ensuring that the correct version is deployed to the chosen environment.
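If you prefer the command line, the same workflow can be dispatched with the GitHub CLI. This is a sketch; it assumes the workflow inputs are named exactly as listed above:

```bash
gh workflow run dispatch-apps.yml \
  -f environment=test \
  -f version=<git-or-docker-tag> \
  -f runMigration=false
```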
The `dispatch-infrastructure.yml` workflow is used for deploying infrastructure components. To use this workflow:

- Go to the Actions tab in the GitHub repository.
- Select the `Dispatch Infrastructure` workflow.
- Click on "Run workflow".
- Provide the necessary inputs:
  - environment: Select the environment you wish to deploy to (`test`, `staging`, or `prod`).
  - version: Enter the version to deploy, which should correspond to a git tag.

This workflow facilitates the deployment of infrastructure to the specified environment, using the version details provided.
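A corresponding GitHub CLI sketch, again assuming the input names match the workflow definition:

```bash
gh workflow run dispatch-infrastructure.yml -f environment=test -f version=<git-tag>
```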
Naming conventions for GitHub Actions:

- `workflow-*.yml`: Reusable workflows
- `ci-cd-*.yml`: Workflows that are triggered by an event
- `dispatch-*.yml`: Workflows that are dispatchable
The `workflow-check-for-changes.yml` workflow uses the `tj-actions/changed-files` action to check which files have been altered since the last commit or tag. We use this filter to ensure we only deploy backend code or infrastructure if the respective files have been altered.
Infrastructure definitions for the project are located in the `.azure/infrastructure` folder. To add new infrastructure components, follow the existing pattern found within this directory. This involves creating new Bicep files or modifying existing ones to define the necessary infrastructure resources.
For example, to add a new storage account, you would:

- Create or update a Bicep file within the `.azure/infrastructure` folder to include the storage account resource definition.
- Ensure that the Bicep file is referenced correctly in `.azure/infrastructure/infrastructure.bicep` to be included in the deployment process.

Refer to the existing infrastructure definitions as templates for creating new components.
A few resources need to be created before we can apply the Bicep to create the main resources. The resources refer to a source key vault in order to fetch the necessary secrets and store them in the key vault for the environment. An SSH key is also necessary for the `ssh-jumper` used to access the resources in Azure within the `vnet`.
Use the following steps:

- Ensure a source key vault exists for the new environment. Either create a new key vault or use an existing one. Currently, two key vaults exist for our environments: one in the test subscription used by Test and Staging, and one in our Production subscription, which Production uses. Ensure you add the necessary secrets that should be used by the new environment. Read the Configuration Guide to learn about the secret convention. Ensure also that the key vault has the following enabled: `Azure Resource Manager for template deployment`.
- Ensure that the role assignments `Key Vault Secrets User` and `Contributor` (should be inherited) are added for the service principal used by the GitHub Entra Application.
- Create an SSH key in Azure and discard the private key. We will use the `az cli` to access the virtual machine, so storing the SSH key is only a security risk. (See the sketch after this list.)
- Create a new environment in GitHub and add the following secrets: `AZURE_CLIENT_ID`, `AZURE_SOURCE_KEY_VAULT_NAME`, `AZURE_SOURCE_KEY_VAULT_RESOURCE_GROUP`, `AZURE_SOURCE_KEY_VAULT_SUBSCRIPTION_ID`, `AZURE_SUBSCRIPTION_ID`, `AZURE_TENANT_ID` and `AZURE_SOURCE_KEY_VAULT_SSH_JUMPER_SSH_PUBLIC_KEY`.
- Add a new file for the environment: `.azure/infrastructure/<env>.bicepparam`. `<env>` must match the environment created in GitHub.
- Add the new environment to the list of environments in `dispatch-infrastructure.yml`.
- Run the GitHub action `Dispatch infrastructure` with the `version` you want to deploy and the new `environment`. All the resources in `.azure/infrastructure/main.bicep` should now be created.
- (The GitHub action might need to be restarted because of a timeout when creating Redis.)
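As referenced above, here is a sketch of creating the SSH key and adding a GitHub environment secret from the CLI. The key name is hypothetical and this is just one way to do it; the Azure portal works as well:

```bash
# Create an SSH public key resource in Azure; the generated private key can be discarded
az sshkey create --resource-group <resource-group> --name dp-be-<env>-ssh-jumper-key

# Add one of the required secrets to the new GitHub environment
gh secret set AZURE_SOURCE_KEY_VAULT_NAME --env <env> --body "<source-key-vault-name>"
```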
There is an `ssh-jumper` virtual machine deployed with the infrastructure. This can be used to create an SSH tunnel into the `vnet`. There are two ways to establish connections:

- Using `az ssh` commands directly:

  ```bash
  # Connect to the VNet using:
  az ssh vm --resource-group dp-be-<env>-rg --vm-name dp-be-<env>-ssh-jumper

  # Or create an SSH tunnel for specific resources (e.g., PostgreSQL database):
  az ssh vm -g dp-be-<env>-rg -n dp-be-<env>-ssh-jumper -- -L 5432:<database-host-name>:5432
  ```

  This example forwards the PostgreSQL default port (5432) to your localhost. Adjust the ports and hostnames as needed for other resources. You may be prompted to install the ssh extension. (A psql example follows this list.)

- Using the forwarding utility script: see scripts/forward-bash/README.md for a more user-friendly way to establish database connections through SSH.
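With the tunnel from the first option running, you can connect to the database from your own machine. A sketch; the user and database names are placeholders, not values from this repository:

```bash
psql -h localhost -p 5432 -U <database-user> -d <database-name>
```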
All application Bicep definitions are located in the `.azure/applications` folder. To add a new application, follow the existing pattern found within this directory. This involves creating a new folder for your application under `.azure/applications` and adding the necessary Bicep files (`main.bicep` and environment-specific parameter files, e.g., `test.bicepparam`, `staging.bicepparam`).
For example, to add a new application named `web-api-new`, you would:

- Create a new folder: `.azure/applications/web-api-new`
- Add a `main.bicep` file within this folder to define the application's infrastructure.
- Use the appropriate Bicep modules within this file. There is one for `Container apps` which you most likely would use.
- Add parameter files for each environment (e.g., `test.bicepparam`, `staging.bicepparam`) to specify environment-specific values.

Refer to the existing applications like `web-api-so` and `web-api-eu` as templates.
Ensure you have followed the steps in Deploying a new infrastructure environment to have the resources required for the applications.
Use the following steps:
- From the infrastructure resources created, add the following GitHub secrets in the new environment (this will not be necessary in the future as secrets would be added directly from infrastructure deployment): `AZURE_APP_CONFIGURATION_NAME`, `AZURE_APP_INSIGHTS_CONNECTION_STRING`, `AZURE_CONTAINER_APP_ENVIRONMENT_NAME`, `AZURE_ENVIRONMENT_KEY_VAULT_NAME`, `AZURE_REDIS_NAME`, `AZURE_RESOURCE_GROUP_NAME`, `AZURE_SERVICE_BUS_NAMESPACE_NAME` and `AZURE_SLACK_NOTIFIER_FUNCTION_APP_NAME`.
- Add new parameter files for the environment in all applications: `.azure/applications/*/<env>.bicepparam`
- Run the GitHub action `Dispatch applications` in order to deploy all applications to the new environment.
- To expose the applications through APIM, see the Common APIM Guide.