IDA (Inspection Data Analyzer) is a repository for running pipelines to analyze data coming from various inspections.
When running locally, the endpoint can be reached at https://localhost:8100.
Requirements to be met:
- Owner role in S159-Robotics-in-Echo subscription (needed for ManagedIdentity deployment to work).
- Owner status in the app registration for dev, prod and staging environment (needed for the generation of client secret).
- az CLI installed.
- For the deployment and injection of the PostgreSQL connection string into the key vault, a JSON file has to be built from the provided bicepparam file. This is an automated process, but you need to make sure you have jq, a command-line JSON processor, installed. If you are using macOS, you can install it with brew.
- Give the deployment script privileges to run. From the root of this repository, run:

  ```bash
  chmod +x scripts/automation/deploy.sh
  ```
- Prepare the resource group name:
  - Open the /scripts/automation/infrastructure.bicepparam file.
  - Change

    ```
    param environment = 'YourEnvName'
    ```

    to the desired name. Keep in mind that, in the same file, you can also change the names of the storage accounts, key vault and database if needed. Remember that the names of these resources must be unique.
- Deploy the Azure resources with the bicep files. Run the following commands:
  - Log in:

    ```bash
    az login
    ```

    Select the S159 subscription when prompted. If not, run:

    ```bash
    az account set -s S159-Robotics-in-Echo
    ```

  - Run

    ```bash
    az bicep build-params --file scripts/automation/infrastructure-<env>.bicepparam --outfile scripts/automation/infrastructure-<env>.parameters.json
    ```

    to generate a JSON file from the provided bicepparam file. Replace `<env>` with the desired environment to deploy.
  - Open scripts/automation/deploy.sh and change `<env>` in `bicepParameterFile`, `serverNamejson`, `administratorLoginjson` and in the parameters section (line 23) to the desired environment. The default is "dev". For example, `bicepParameterFile` is by default 'scripts/automation/infrastructure-dev.bicepparam'; change dev in the path to prod or staging, as desired.
  - Run

    ```bash
    bash scripts/automation/deploy.sh
    ```

    to deploy the resources.
  - Note: the administrator login password and the connection string for the PostgreSQL flexible server will be available in the deployed key vault.
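For reference, a complete dev deployment could then look like the sketch below (dev is just the default environment; swap in staging or prod as needed):

```bash
# Sketch of a full dev deployment, assuming deploy.sh has been pointed at the dev files.
az login
az account set -s S159-Robotics-in-Echo

# Generate the parameters JSON that deploy.sh consumes (the script itself relies on jq).
az bicep build-params \
  --file scripts/automation/infrastructure-dev.bicepparam \
  --outfile scripts/automation/infrastructure-dev.parameters.json

# Deploy the resources.
bash scripts/automation/deploy.sh
```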
You can populate the previously deployed storage accounts with blob containers as needed, following these steps:
- Open the /scripts/automation/modules/blob-container.bicep file.
- Change:

  ```
  param storageAccountName string = 'YourStorageAccountNameHere'
  param containerName string = 'YourContainerNameHere'
  ```

  Note: the container name should be in lowercase.
- Run the following command:

  ```bash
  az deployment group create --resource-group <resource-group-name> --template-file <bicep-file-name>
  ```

  replacing `<resource-group-name>` with the already deployed resource group name, and `<bicep-file-name>` with /scripts/automation/modules/blob-container.bicep.
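As a concrete example (the resource group name here is hypothetical; use your own):

```bash
# Hypothetical resource group name, for illustration only.
az deployment group create \
  --resource-group rg-ida-dev \
  --template-file scripts/automation/modules/blob-container.bicep
```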
- Under /scripts/automation/appRegistration, there are config files available for each of the environments (dev, staging and prod). Select the one you want to modify to deploy a new client secret.
- Ensure that `CFG_IDA_CLIENT_ID` is the client ID of the app in which you want to add a new client secret. These values are already pre-filled for the IDA app registrations.
- You can change `CFG_IDA_SECRET_NAME` to the desired secret name.
- Change `CFG_RESOURCE_GROUP` and `CFG_VAULT_NAME` to the resource group and respective key vault in which the secret will be injected.
- Grant privileges to 'app-injection-secrets.sh' and run it:

  ```bash
  bash scripts/automation/appRegistration/app-injection-secrets.sh
  ```

  Follow the instructions prompted in the command line and choose the environment you are deploying (dev, prod or staging).
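For illustration, assuming the config files are shell-style key=value files read by the script, the client secret entries for dev might look like this sketch (all values are placeholders, not real IDs):

```bash
# Placeholder values for illustration only; the real config files are pre-filled.
CFG_IDA_CLIENT_ID="00000000-0000-0000-0000-000000000000"  # app registration client ID
CFG_IDA_SECRET_NAME="ida-client-secret-dev"               # name for the new client secret
CFG_RESOURCE_GROUP="rg-ida-dev"                           # resource group holding the key vault
CFG_VAULT_NAME="kv-ida-dev"                               # key vault the secret is injected into
```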
- Following the same logic as for the client secrets (app registration) in the previous section, modify the names of the storage accounts and the names you want to use for the deployed connection strings in the same config files. For example, `CFG_STORAGE_ACCOUNT_NAME_RAW` is the name of the raw storage account, and `CFG_CONNECTION_STRING_RAW_NAME` is the name displayed in the key vault for the connection string of the raw storage account. Do the same for the anon and vis storage accounts.
- Grant privileges to 'blobstorage-injection-connectionstrings.sh' and run it:

  ```bash
  bash scripts/automation/appRegistration/blobstorage-injection-connectionstrings.sh
  ```

  Follow the instructions prompted in the command line and choose the environment you are deploying (dev, prod or staging).
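Under the same assumption about the config file format, the storage entries might look like the following sketch (names are placeholders):

```bash
# Placeholder values for illustration only.
CFG_STORAGE_ACCOUNT_NAME_RAW="idarawdev"                # raw storage account name
CFG_CONNECTION_STRING_RAW_NAME="raw-connection-string"  # key vault entry name for its connection string
# Repeat with the corresponding CFG_ variables for the anon and vis storage accounts.
```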
Our database model is defined in the folder /api/Models and we use Entity Framework Core as an object-relational mapper (O/RM). When making changes to the model, we also need to create a new migration and apply it to our databases.

Install the EF Core CLI tool:

```bash
dotnet tool install --global dotnet-ef
```

NB: Make sure you have fetched the newest code from main and that no-one else is making migrations at the same time as you!
- Set the environment variable `ASPNETCORE_ENVIRONMENT` to `Development`:

  ```bash
  export ASPNETCORE_ENVIRONMENT=Development
  ```

- Run the following command from /api:

  ```bash
  dotnet ef migrations add AddTableNamePropertyName
  ```

  The `add` command will make changes to existing files and add 2 new files in /api/Migrations, which all need to be checked in to git.
- The migration name (`AddTableNamePropertyName` in the example above) is basically a database commit message. `Database__ConnectionString` will be fetched from the key vault when running the `add` command. `add` will not update or alter the connected database in any way, but will add a description of the changes that will be applied later.
- If you for some reason are unhappy with your migration, you can delete it with:

  ```bash
  dotnet ef migrations remove
  ```

  Once removed, you can make new changes to the model and then create a new migration with `add`.
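Putting the steps together, a typical migration session might look like this sketch (the migration name below is a made-up example):

```bash
# Run from the repository root; the migration name is hypothetical.
export ASPNETCORE_ENVIRONMENT=Development
cd api
dotnet ef migrations add AddRobotBatteryLevel  # generates files under /api/Migrations
git add Migrations                             # the generated files must be checked in

# If the migration turned out wrong, drop it and start over:
dotnet ef migrations remove
```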
Updates to the database structure (applying migrations) are done in GitHub Actions.
When a pull request contains changes in the /api/Migrations folder, a workflow is triggered to notify that the pull request has database changes.
After the pull request is approved, a user can then trigger the database changes by commenting /UpdateDatabase on the pull request.
This will trigger another workflow which updates the database by applying the new migrations.
By doing migrations this way, we ensure that the commands themselves are scripted, and that the database changes become part of the review process of a pull request.
This is done automatically as part of the promotion workflows (promoteToProduction and promoteToStaging).
In everyday development we use CSharpier to auto-format code on save. The installation procedure is described here. No configuration should be required.

The formatting of the api is defined in the .editorconfig file.
We use dotnet format to format and verify code style in the api based on the C# coding conventions.
dotnet format is included in the .NET 6 SDK.
To check the formatting, run the following commands from the repository root:

```bash
cd api
dotnet format --severity info --verbosity diagnostic --verify-no-changes --exclude ./api/migrations
```
dotnet format is used to detect naming conventions and other code-related issues. They can be fixed by running:

```bash
dotnet format --severity info
```
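For example, a typical check-then-fix session could look like this (a sketch using only the commands above):

```bash
cd api
# Verify formatting and code style without changing any files (fails if changes are needed).
dotnet format --severity info --verbosity diagnostic --verify-no-changes --exclude ./api/migrations
# Apply the fixes in place.
dotnet format --severity info
```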