Merge pull request #2168 from City-of-Helsinki/develop
Merge develop to main (2023-08-09)
mjturt authored Aug 10, 2023
2 parents 8fa52e9 + 01ea28c commit 18fa5ca
Showing 182 changed files with 11,201 additions and 1,883 deletions.
17 changes: 7 additions & 10 deletions .env.benefit-backend.example
@@ -61,14 +61,18 @@ MEDIA_ROOT=/app/var/media
CSRF_COOKIE_NAME=yjdhcsrftoken
CSRF_TRUSTED_ORIGINS="localhost:3000,localhost:3100"

YRTTI_TIMEOUT=30
YRTTI_BASIC_INFO_PATH=https://yrtti-integration-test.agw.arodevtest.hel.fi/api/BasicInfo
SERVICE_BUS_INFO_PATH=https://ytj-integration-test.agw.arodevtest.hel.fi/api/GetCompany
YRTTI_BASE_URL=https://yrtti-integration-test.agw.arodevtest.hel.fi/api
YRTTI_AUTH_PASSWORD=
YRTTI_AUTH_USERNAME=helsinkilisatest
YRTTI_TIMEOUT=30
YRTTI_SEARCH_LIMIT=10
YRTTI_DISABLE=0

SERVICE_BUS_BASE_URL=https://ytj-integration-test.agw.arodevtest.hel.fi/api
SERVICE_BUS_AUTH_PASSWORD=
SERVICE_BUS_AUTH_USERNAME=helsinkilisatest
SERVICE_BUS_TIMEOUT=30
SERVICE_BUS_SEARCH_LIMIT=10

SEND_AUDIT_LOG=0

@@ -98,10 +102,3 @@ SENTRY_ENVIRONMENT=local
# for Mailhog inbox
EMAIL_HOST=mailhog
EMAIL_PORT=1025

# Variables for using a S3 compatible disk in local development environment in upcoming staging / production environments
USE_S3=1
S3_ENDPOINT_URL="http://minio:9000"
S3_ACCESS_KEY_ID=minio-root
S3_SECRET_ACCESS_KEY=minio-pass
S3_STORAGE_BUCKET_NAME=local-s3-bucket
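The entries above are plain `KEY=VALUE` env-file lines. As a rough illustration of how such a file resolves to configuration values, here is a minimal stand-alone parser; it is only a sketch, not the loader the project actually uses (the backend reads these through its Django settings machinery):

```python
# Minimal sketch of resolving .env-style KEY=VALUE lines into a dict.
# Illustrative only; not the project's actual settings loader.

def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"')  # drop optional quotes
    return env

sample = """
YRTTI_TIMEOUT=30
CSRF_TRUSTED_ORIGINS="localhost:3000,localhost:3100"
SEND_AUDIT_LOG=0
"""
config = parse_env(sample)
print(config["YRTTI_TIMEOUT"])  # -> 30
```

Note that every value comes back as a string; flags such as `SEND_AUDIT_LOG=0` still need explicit conversion on the consuming side.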
65 changes: 65 additions & 0 deletions azure-pipelines/helsinkilisa-review.yml
@@ -0,0 +1,65 @@
#
# Review pipeline. Runs build and deploy for Platta test environments.
# The pipeline runs various tests, e.g. unit tests and browser tests.
#
# Continuous integration (CI) triggers cause a pipeline to run whenever you push
# an update to the specified branches or you push specified tags.
# Only pull requests trigger this pipeline
trigger: none

# Pull request (PR) triggers cause a pipeline to run whenever a pull request is
# opened with one of the specified target branches, or when updates are made to
# such a pull request.
#
# GitHub creates a new ref when a pull request is created. The ref points to a
# merge commit, which is the merged code between the source and target branches
# of the pull request.
#
# Opt out of pull request validation
pr:
# PR target branch
branches:
include:
- develop
paths:
include:
- azure-pipelines/helsinkilisa-review.yml
- backend/docker/benefit.Dockerfile
- backend/benefit/**
- backend/shared/**
- frontend/*
- frontend/benefit/**
- frontend/shared/**
exclude:
- README.md
- backend/kesaseteli/**
- backend/tet/**
- frontend/kesaseteli/**
- frontend/tet/**
- frontend/**/browser-tests
- frontend/**/__tests__

# By default, use self-hosted agents
pool: Default

resources:
repositories:
# Azure DevOps repository
- repository: yjdh-helsinkilisa-pipelines
type: git
# Azure DevOps project/repository
name: yjdh-helsinkilisa/yjdh-helsinkilisa-pipelines

extends:
# Filename in Azure DevOps Repository (note possible -ui or -api)
# Django example: azure-pipelines-PROJECTNAME-api-release.yml
# Drupal example: azure-pipelines-drupal-release.yml
template: azure-pipelines-helsinkilisa-review.yml@yjdh-helsinkilisa-pipelines
# Application build arguments and config map values as key value pairs.
# The values here will override the values defined in the yjdh-benefit-pipelines repository.
# for example
# parameters:
# buildArgs:
# NEXT_PUBLIC_DEBUG: 0
# configMap: # pod environment variables
# DEBUG: 0
28 changes: 6 additions & 22 deletions backend/benefit/README.md
@@ -56,7 +56,6 @@ Set default permissions
This creates permissions for the handler's group so they have access to the Terms in
the django admin.


### Configure docker environment

In the yjdh project root, set up the .env.benefit-backend file: `cp .env.benefit-backend.example .env.benefit-backend`
@@ -86,6 +85,7 @@ The project is now running at [localhost:8000](https://localhost:8000)
### Updating translations

In `backend/benefit/`:

- Run `python manage.py makemessages --no-location -l fi -l sv -l en`
- Run `python manage.py compilemessages`

@@ -98,7 +98,7 @@ DUMMY_COMPANY_FORM_CODE can be set to test with different company_form parameter
To seed the database with some mock application data, run `python manage.py seed`, which by default generates 10 applications for each of the seven possible application statuses and one attachment with a .pdf file for each of them. To generate more applications, use the optional `--number` flag; for example, `python manage.py seed --number=30` creates 30 applications of each status. **Note that running the command deletes all previous application data from the database and clears the media folder.**
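The scale of a seed run can be pictured with a small back-of-the-envelope sketch. The status names below are illustrative placeholders, not the project's exact enum values:

```python
# Sketch of what `python manage.py seed` produces: N applications for each
# of the seven application statuses. Status names here are placeholders.

STATUSES = [
    "draft", "received", "handling", "additional_information_needed",
    "cancelled", "accepted", "rejected",
]

def seeded_counts(number: int = 10) -> dict:
    """Return how many mock applications a seed run creates per status."""
    return {status: number for status in STATUSES}

counts = seeded_counts()                      # default run
print(sum(counts.values()))                   # -> 70 (7 statuses x 10)
print(sum(seeded_counts(30).values()))        # -> 210, as with --number=30
```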

[Mailhog](https://github.com/mailhog) is available in the local development environment at [localhost:8025](http://localhost:8025/) for previewing
and testing the emails sent by the application after setting `EMAIL_HOST` and `EMAIL_PORT` as in `.env.benefit-backend.example`.
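For a quick manual smoke test of the Mailhog wiring, a message can also be sent with the Python stdlib against the same `EMAIL_HOST`/`EMAIL_PORT` values as the example env file. The addresses below are illustrative, and the application itself sends mail through Django's email backend rather than like this:

```python
# Hand-rolled smoke test for the local Mailhog inbox (host "mailhog",
# port 1025, no auth, no TLS). Addresses are illustrative placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "noreply@example.com"
msg["To"] = "applicant@example.com"
msg["Subject"] = "Mailhog smoke test"
msg.set_content("If you can read this in Mailhog, SMTP wiring works.")

def send_to_mailhog(message: EmailMessage,
                    host: str = "mailhog", port: int = 1025) -> None:
    """Deliver a message to the local Mailhog SMTP endpoint."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.send_message(message)

# send_to_mailhog(msg)  # uncomment when running inside the compose network
```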

**Using LOAD_FIXTURES=1 is recommended for local testing** as it loads e.g. default
@@ -176,7 +176,7 @@ and redoc documentation at [https://localhost:8000/api_docs/redoc/](https://loca

## Scheduled jobs

Jobs can be scheduled using the django-extensions package and setting the jobs to run as cronjobs.
Currently configured jobs (registered in the `applications/jobs` directory):

- Daily: check applications that have been in the cancelled state for 30 or more days and delete them.
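The daily cleanup rule can be distilled into a framework-free sketch; plain tuples stand in for the real Application model here:

```python
# Distilled sketch of the daily job above: applications that have been in
# the cancelled state for 30 or more days are selected for deletion.
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)

def expired_cancelled(apps, now):
    """apps: iterable of (id, status, status_changed_at) tuples."""
    return [
        app_id
        for app_id, app_status, changed_at in apps
        if app_status == "cancelled" and now - changed_at >= RETENTION
    ]

now = datetime(2023, 8, 10, tzinfo=timezone.utc)
apps = [
    (1, "cancelled", now - timedelta(days=31)),  # past retention -> delete
    (2, "cancelled", now - timedelta(days=5)),   # recent -> keep
    (3, "accepted",  now - timedelta(days=90)),  # wrong status -> keep
]
print(expired_cancelled(apps, now))  # -> [1]
```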
@@ -208,24 +208,8 @@ env variables / settings are provided by Azure blob storage:
- `AZURE_ACCOUNT_KEY`
- `AZURE_CONTAINER`

An AWS S3 compatible disk storage can be configured with the following environment variables.

- `USE_S3`
- `S3_ENDPOINT_URL`
- `S3_ACCESS_KEY_ID`
- `S3_SECRET_ACCESS_KEY`
- `S3_STORAGE_BUCKET_NAME`

[MinIO](https://min.io/) can be configured to work as the AWS S3 compatible file storage in the local development environment. The MinIO admin panel can be accessed at [localhost:9090](http://localhost:9090/).
See `.env.benefit-backend.example` for the MinIO variables and credentials.

**Note**
As tests freeze time with [freezegun](https://github.com/spulec/freezegun), MinIO requests fail when running tests with the exception `botocore.exceptions.ClientError: An error occurred (RequestTimeTooSkewed) when calling the PutObject operation: The difference between the request time and the server's time is too large.`
Switch from MinIO to the local disk by setting `USE_S3=0` before running the pytest tests. For now, the only workaround to run tests with S3 enabled is to set the host machine date to the date used in the tests: `2021-06-04 00:00:00 (UTC)`.
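The failure mode is easy to reproduce in isolation: signed S3 requests carry the client's timestamp, and S3-compatible servers reject requests whose clock differs from their own by more than a skew window (the exact tolerance is the server's; 15 minutes is assumed here as a typical value). A frozen test clock is years away from the real server clock, so every upload fails:

```python
# Why freezegun breaks MinIO/S3 uploads: the frozen client timestamp is far
# outside the server's allowed clock-skew window. MAX_SKEW is an assumed
# typical tolerance, not a value taken from the MinIO configuration.
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=15)

def request_time_too_skewed(request_time, server_time):
    return abs(server_time - request_time) > MAX_SKEW

frozen = datetime(2021, 6, 4, tzinfo=timezone.utc)       # tests' frozen date
real = datetime(2023, 8, 10, 12, 0, tzinfo=timezone.utc)  # real server clock
print(request_time_too_skewed(frozen, real))  # -> True: upload rejected
print(request_time_too_skewed(real, real + timedelta(minutes=2)))  # -> False
```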


## Sentry error monitoring
The `local`, `development` and `testing` environments are connected to the Sentry instance at [`https://sentry.test.hel.ninja/`](https://sentry.test.hel.ninja/) under the `yjdh-benefit`-team.
There are separate Sentry projects for the Django api (`yjdh-benefit-api`), handler UI (`yjdh-benefit-handler`) and applicant UI (`yjdh-benefit-applicant`).

To limit the amount of possibly sensitive data sent to Sentry, the same configuration as in kesaseteli is used by default, see [`https://github.com/City-of-Helsinki/yjdh/pull/779`](https://github.com/City-of-Helsinki/yjdh/pull/779).
3 changes: 2 additions & 1 deletion backend/benefit/applications/admin.py
@@ -45,10 +45,11 @@ class ApplicationAdmin(admin.ModelAdmin):
AttachmentInline,
CalculationInline,
)
list_filter = ("status", "company")
list_filter = ("status", "application_origin", "company")
list_display = (
"id",
"status",
"application_origin",
"application_number",
"company_name",
"company_contact_person_email",
58 changes: 39 additions & 19 deletions backend/benefit/applications/api/v1/application_batch_views.py
@@ -191,7 +191,7 @@ def assign_applications(self, request):
def create_application_batch_by_ids(app_status, apps):
if apps:
batch = ApplicationBatch.objects.create(
proposal_for_decision=app_status
proposal_for_decision=app_status, handler=request.user
)
return batch

@@ -209,16 +209,23 @@ def create_application_batch_by_ids(app_status, apps):
)

# Try finding an existing batch
batch = (
ApplicationBatch.objects.filter(
status=ApplicationBatchStatus.DRAFT, proposal_for_decision=app_status
).first()
) or create_application_batch_by_ids(
app_status,
apps,
)
try:
batch = (
ApplicationBatch.objects.filter(
status=ApplicationBatchStatus.DRAFT,
proposal_for_decision=app_status,
).first()
) or create_application_batch_by_ids(
app_status,
apps,
)
except BatchTooManyDraftsError:
return Response(
{"errorKey": "batchInvalidDraftAlreadyExists"},
status=status.HTTP_400_BAD_REQUEST,
)

if batch:
if batch and batch.status == ApplicationBatchStatus.DRAFT:
apps.update(batch=batch)
batch = ApplicationBatchSerializer(batch)
return Response(batch.data, status=status.HTTP_200_OK)
@@ -239,16 +246,22 @@ def deassign_applications(self, request, pk=None):
application_ids = request.data.get("application_ids")
batch = self.get_batch(pk)

apps = Application.objects.filter(
deassign_apps = Application.objects.filter(
batch=batch,
pk__in=application_ids,
status__in=[ApplicationStatus.ACCEPTED, ApplicationStatus.REJECTED],
batch=batch,
)
if apps:
for app in apps:
if deassign_apps:
for app in deassign_apps:
app.batch = None
app.save()
return Response(status=status.HTTP_200_OK)
remaining_apps = Application.objects.filter(batch=batch)
if len(remaining_apps) == 0:
batch.delete()
return Response(
{"remainingApps": len(remaining_apps)}, status=status.HTTP_200_OK
)

return Response(
{"detail": "Applications were not applicable to be detached."},
status=status.HTTP_404_NOT_FOUND,
@@ -264,23 +277,29 @@ def status(self, request, pk=None):
batch = self.get_batch(pk)
if new_status not in [
ApplicationBatchStatus.DRAFT,
ApplicationBatchStatus.AHJO_REPORT_CREATED,
ApplicationBatchStatus.AWAITING_AHJO_DECISION,
ApplicationBatchStatus.DECIDED_ACCEPTED,
ApplicationBatchStatus.DECIDED_REJECTED,
ApplicationBatchStatus.SENT_TO_TALPA,
]:
return Response(status=status.HTTP_400_BAD_REQUEST)

if new_status in [
ApplicationBatchStatus.DECIDED_ACCEPTED,
ApplicationBatchStatus.DECIDED_REJECTED,
]:
# Archive all applications if this batch will be completed
Application.objects.filter(batch=batch).update(archived=True)

# Patch all required fields for batch completion
# Patch all required fields after batch inspection
for key in request.data:
setattr(batch, key, request.data.get(key))

if new_status in [
ApplicationBatchStatus.SENT_TO_TALPA,
]:
# Archive all applications if this batch will be completed
Application.objects.filter(batch=batch).update(archived=True)

previous_status = batch.status
batch.status = new_status

try:
@@ -309,6 +328,7 @@ def status(self, request, pk=None):
{
"id": batch.id,
"status": batch.status,
"previousStatus": previous_status,
"decision": batch.proposal_for_decision,
},
status=status.HTTP_200_OK,
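The new behavior of the batch `status` endpoint in the diff above can be summarized as a framework-free sketch: only a fixed set of statuses is accepted, the previous status is reported back, and moving a batch to `SENT_TO_TALPA` archives its applications. Lowercase strings stand in for the `ApplicationBatchStatus` enum and are assumptions, not the project's exact values:

```python
# Distilled sketch of the batch status endpoint's validation and archiving
# logic. Status strings are illustrative stand-ins for the Django enum.

ALLOWED = {
    "draft",
    "ahjo_report_created",
    "awaiting_ahjo_decision",
    "decided_accepted",
    "decided_rejected",
    "sent_to_talpa",
}

def change_status(batch: dict, new_status: str):
    """Return (http_status, archived), mimicking the endpoint's behavior."""
    if new_status not in ALLOWED:
        return 400, False                    # unknown status -> 400, no change
    batch["previous_status"] = batch["status"]
    batch["status"] = new_status
    archived = new_status == "sent_to_talpa"  # archive apps on completion
    return 200, archived

batch = {"status": "awaiting_ahjo_decision"}
print(change_status(batch, "exported"))       # -> (400, False)
print(change_status(batch, "sent_to_talpa"))  # -> (200, True)
print(batch["previous_status"])               # -> awaiting_ahjo_decision
```

Returning `previous_status` alongside the new status mirrors the diff's addition of `"previousStatus"` to the response payload.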