yml to yaml (#3094)
* yml to yaml

* more yml to yaml

* yml to yaml in tests

---------

Co-authored-by: Josh Nygaard <[email protected]>
JNygaard-Skylight and joshnygaard authored Jan 3, 2025
1 parent f402906 commit 5cccf7e
Showing 37 changed files with 65 additions and 54 deletions.
2 changes: 1 addition & 1 deletion .gitattributes
@@ -8,6 +8,6 @@
*.sh text eol=lf
*.md text eol=lf
*.json text eol=lf
- *.yml text eol=lf
+ *.yaml text eol=lf
*.csv text eol=lf

File renamed without changes.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion containers/ecr-viewer/.dockerignore
@@ -1,7 +1,7 @@
.next
.github
.env
- docker-compose.yml
+ docker-compose.yaml
Dockerfile
node_modules
seed-scripts
2 changes: 1 addition & 1 deletion containers/ecr-viewer/Makefile
@@ -11,7 +11,7 @@ help:
@echo "\033[1;36mmake lint\033[0m \033[0;33m- Runs next lint\033[0m"
@echo "\033[1;36mmake test\033[0m \033[0;33m- Runs TZ=America/New_York jest\033[0m"
@echo "\033[1;36mmake test-watch\033[0m \033[0;33m- Runs TZ=America/New_York jest --watch\033[0m"
- @echo "\033[1;36mmake convert-seed-data\033[0m \033[0;33m- Runs docker compose -f ./seed-scripts/docker-compose.yml up --abort-on-container-exit\033[0m"
+ @echo "\033[1;36mmake convert-seed-data\033[0m \033[0;33m- Runs docker compose -f ./seed-scripts/docker-compose.yaml up --abort-on-container-exit\033[0m"
@echo "\033[1;36mmake cypress-open\033[0m \033[0;33m- Opens Cypress\033[0m"
@echo "\033[1;36mmake cypress-run\033[0m \033[0;33m- Runs Cypress tests\033[0m"
@echo "\033[1;36mmake cypress-run-local\033[0m \033[0;33m- Runs Cypress tests in local environment\033[0m"
2 changes: 1 addition & 1 deletion containers/ecr-viewer/README.md
@@ -74,7 +74,7 @@ The eCR Viewer is primarily developed on Mac silicon machines. See this [integrea

Sample eICRs are included in `containers/ecr-viewer/seed-scripts/baseECR/`. If you ever need to update the eCRs or add new eCRs you can regenerate the data by:

- 1. Delete the current volume used by your DB: `docker compose -f ./docker-compose.yml --profile "*" down -v`
+ 1. Delete the current volume used by your DB: `docker compose -f ./docker-compose.yaml --profile "*" down -v`
2. Run `npm run convert-seed-data` to re-run the FHIR conversion of the seed eCRs
3. Run `npm run local-dev` to re-run the eCR Viewer with the newly converted data.

2 changes: 1 addition & 1 deletion containers/ecr-viewer/design-review/design-review.sh
@@ -122,4 +122,4 @@ open $URL

# Prompt to end review session
read -p "Press enter to end review"
- docker compose -f ./docker-compose.yml --profile "*" down
+ docker compose -f ./docker-compose.yaml --profile "*" down
File renamed without changes.
16 changes: 8 additions & 8 deletions containers/ecr-viewer/package.json
@@ -6,22 +6,22 @@
"private": true,
"scripts": {
"dev": "next dev",
"local-dev": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./docker-compose.yml --profile ${CONFIG_NAME} up -d && npm run dev'",
"local-docker": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./docker-compose.yml --profile ${CONFIG_NAME} --profile ecr-viewer up'",
"local-docker:build": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./docker-compose.yml --profile ${CONFIG_NAME} --profile ecr-viewer up --build'",
"local-dev": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./docker-compose.yaml --profile ${CONFIG_NAME} up -d && npm run dev'",
"local-docker": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./docker-compose.yaml --profile ${CONFIG_NAME} --profile ecr-viewer up'",
"local-docker:build": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./docker-compose.yaml --profile ${CONFIG_NAME} --profile ecr-viewer up --build'",
"setup-local-env": "./setup-env.sh",
"build": "next build",
"start": "next start",
"lint": "next lint",
"test": "TZ=America/New_York jest",
"test:watch": "TZ=America/New_York jest --watch",
"clear-local": "docker compose -f ./seed-scripts/docker-compose-seed.yml --profile \"*\" down -v",
"convert-seed-data": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./seed-scripts/docker-compose-seed.yml --profile ${CONFIG_NAME} --profile ecr-viewer up --abort-on-container-exit'",
"convert-seed-data:build": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./seed-scripts/docker-compose-seed.yml --profile ${CONFIG_NAME} --profile ecr-viewer up --abort-on-container-exit --build'",
"update-cypress-data": "docker compose -f ./seed-scripts/docker-compose-create-sql.yml up --build --abort-on-container-exit",
"clear-local": "docker compose -f ./seed-scripts/docker-compose-seed.yaml --profile \"*\" down -v",
"convert-seed-data": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./seed-scripts/docker-compose-seed.yaml --profile ${CONFIG_NAME} --profile ecr-viewer up --abort-on-container-exit'",
"convert-seed-data:build": "npx @dotenvx/dotenvx run -f .env.local -- sh -c 'docker compose -f ./seed-scripts/docker-compose-seed.yaml --profile ${CONFIG_NAME} --profile ecr-viewer up --abort-on-container-exit --build'",
"update-cypress-data": "docker compose -f ./seed-scripts/docker-compose-create-sql.yaml up --build --abort-on-container-exit",
"cypress:open": "cypress open",
"cypress:run": "cypress run",
"cypress:run-local": "docker compose -f cypress/docker-compose.yml --env-file .env.test up postgres -d && concurrently --kill-others 'npm run dev' 'npx wait-on http://localhost:3000 && NODE_ENV=dev cypress run ; docker compose down'",
"cypress:run-local": "docker compose -f cypress/docker-compose.yaml --env-file .env.test up postgres -d && concurrently --kill-others 'npm run dev' 'npx wait-on http://localhost:3000 && NODE_ENV=dev cypress run ; docker compose down'",
"cypress:run-prod": "NODE_ENV=production cypress run"
},
"dependencies": {
File renamed without changes.
@@ -1,7 +1,7 @@
name: ecr-viewer

include:
- - ../docker-compose.yml
+ - ../docker-compose.yaml
services:
fhir-converter-service:
platform: linux/amd64
2 changes: 1 addition & 1 deletion containers/ecr-viewer/src/app/api/utils.ts
@@ -12,7 +12,7 @@ export const AZURE_SOURCE = "azure";
* @returns An object representing the path mappings defined in the YAML configuration file.
*/
export function loadYamlConfig(): PathMappings {
- const filePath = path.join(process.cwd(), "src/app/api/fhirPath.yml");
+ const filePath = path.join(process.cwd(), "src/app/api/fhirPath.yaml");
const fileContents = fs.readFileSync(filePath, "utf8");
return <PathMappings>yaml.load(fileContents);
}
@@ -11,7 +11,7 @@ describe("Active Problems Table", () => {
let container: HTMLElement;
beforeEach(() => {
const fhirPathFile = fs
- .readFileSync("./src/app/api/fhirPath.yml", "utf8")
+ .readFileSync("./src/app/api/fhirPath.yaml", "utf8")
.toString();
const fhirPathMappings = YAML.load(fhirPathFile) as PathMappings;

@@ -104,7 +104,7 @@ describe("Snapshot test for Procedures (Treatment Details)", () => {
] as unknown as Procedure[];

const fhirPathFile = fs
- .readFileSync("./src/app/api/fhirPath.yml", "utf8")
+ .readFileSync("./src/app/api/fhirPath.yaml", "utf8")
.toString();
const mappings = YAML.load(fhirPathFile) as PathMappings;
const treatmentData = [
@@ -11,7 +11,7 @@ describe("Immunizations Table", () => {
let container: HTMLElement;
beforeAll(() => {
const fhirPathFile = fs
- .readFileSync("./src/app/api/fhirPath.yml", "utf8")
+ .readFileSync("./src/app/api/fhirPath.yaml", "utf8")
.toString();
const fhirPathMappings = YAML.load(fhirPathFile) as PathMappings;

2 changes: 1 addition & 1 deletion containers/fhir-converter/tests/integration/conftest.py
@@ -8,7 +8,7 @@
def setup(request):
print("Setting up tests...")
compose_path = os.path.join(os.path.dirname(__file__), "../..")
- compose_file_name = "docker-compose.yml"
+ compose_file_name = "docker-compose.yaml"
fhir_converter = DockerCompose(
compose_path,
compose_file_name=compose_file_name,
2 changes: 1 addition & 1 deletion containers/message-refiner/tests/integration/conftest.py
@@ -9,7 +9,7 @@
def setup(request):
print("Setting up tests...")
path = Path(__file__).resolve().parent.parent.parent
- compose_file_name = os.path.join(path, "docker-compose.yml")
+ compose_file_name = os.path.join(path, "docker-compose.yaml")
orchestration_service = DockerCompose(path, compose_file_name=compose_file_name)

orchestration_service.start()
2 changes: 1 addition & 1 deletion containers/orchestration/Dockerfile
@@ -41,4 +41,4 @@ RUN apt-get update && apt-get install -y curl

EXPOSE 8080
RUN export OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true
- CMD opentelemetry-instrument --service_name dibbs-orchestration uvicorn app.main:app --host 0.0.0.0 --port 8080 --log-config app/log_config.yml
+ CMD opentelemetry-instrument --service_name dibbs-orchestration uvicorn app.main:app --host 0.0.0.0 --port 8080 --log-config app/log_config.yaml
File renamed without changes.
@@ -1,27 +1,27 @@
services:
postgres:
extends:
- file: ../ecr-viewer/docker-compose.yml
+ file: ../ecr-viewer/docker-compose.yaml
service: postgres
profiles: ["", "azure", "aws"]
sqlserver:
extends:
- file: ../ecr-viewer/docker-compose.yml
+ file: ../ecr-viewer/docker-compose.yaml
service: sqlserver
profiles: ["sqlserver"]
aws-storage:
extends:
- file: ../ecr-viewer/docker-compose.yml
+ file: ../ecr-viewer/docker-compose.yaml
service: aws-storage
profiles: ["aws"]
azure-storage:
extends:
- file: ../ecr-viewer/docker-compose.yml
+ file: ../ecr-viewer/docker-compose.yaml
service: azure-storage
profiles: ["", "azure"]
ecr-viewer:
extends:
- file: ../ecr-viewer/docker-compose.yml
+ file: ../ecr-viewer/docker-compose.yaml
service: ecr-viewer
profiles: ["", "azure", "aws", "sqlserver"]
environment:
@@ -104,7 +104,7 @@ services:
ports:
- "9090:9090"
volumes:
- "./prometheus.yml:/etc/prometheus/prometheus.yml"
- "./prometheus.yaml:/etc/prometheus/prometheus.yaml"
- "prom_data:/prometheus"
otel-collector:
image: otel/opentelemetry-collector-contrib:latest
@@ -127,7 +127,7 @@ services:
volumes:
- ./grafana.ini:/etc/grafana/grafana.ini
- ./grafana/dashboards:/etc/grafana/provisioning/dashboards
- - ./grafana/datasources/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml
+ - ./grafana/datasources/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
- grafana_data:/var/lib/grafana
depends_on:
- prometheus
53 changes: 32 additions & 21 deletions containers/orchestration/monitoring.md
@@ -3,17 +3,28 @@
This document outlines the components, concepts, and scope of the alerting and monitoring technologies used to track the DIBBs Orchestration Service. While it's not intended to be comprehensive, it should provide a high-level contextual overview of how and why various monitoring pieces are working together.

## Contents
- * [Packages](#packages)
- * [What Are The Pieces](#what-are-the-pieces-concepts-components-and-terms)
- * [Types of Telemetry](#types-of-telemetry)
- * [OpenTelemetry Basics and Configuration](#opentelemetry-basics-and-configuration)
- * [The OTel Collector](#the-otel-collector)
- * [The OTel Collector and Jaeger](#the-otel-collector-and-jaeger)
- * [The OTel Collector and Prometheus](#the-otel-collector-and-prometheus)
- * [How We've Set Things Up](#how-weve-set-things-up)
- * [The Monitoring Flow](#the-monitoring-flow-this-creates)
- * [Where This Happens In The Code](#where-this-happens-in-the-code)
- * [Manual Tracing Instrumentation: Notes and Practices](#manual-tracing-instrumentation-notes-and-practices)
+ - [Monitoring Systems Overview](#monitoring-systems-overview)
+ - [Contents](#contents)
+ - [Packages](#packages)
+ - [What Are The Pieces: Concepts, Components, and Terms](#what-are-the-pieces-concepts-components-and-terms)
+ - [Types of Telemetry](#types-of-telemetry)
+ - [OpenTelemetry Basics and Configuration](#opentelemetry-basics-and-configuration)
+ - [The OTel Collector](#the-otel-collector)
+ - [The OTel Collector and Jaeger](#the-otel-collector-and-jaeger)
+ - [The OTel Collector and Prometheus](#the-otel-collector-and-prometheus)
+ - [How We've Set Things Up](#how-weve-set-things-up)
+ - [The Monitoring Flow This Creates](#the-monitoring-flow-this-creates)
+ - [Where This Happens In The Code](#where-this-happens-in-the-code)
+ - [`Dockerfile`](#dockerfile)
+ - [`main.py`](#mainpy)
+ - [`docker-compose.yaml`](#docker-composeyaml)
+ - [`otel-collector-config.yaml`](#otel-collector-configyaml)
+ - [`jaeger-ui.json`](#jaeger-uijson)
+ - [`prometheus.yaml`](#prometheusyaml)
+ - [`grafana.ini`](#grafanaini)
+ - [`grafana/dashboards/dashboard.yaml`](#grafanadashboardsdashboardyaml)
+ - [`grafana/datasource/datasource.yaml`](#grafanadatasourcedatasourceyaml)
+ - [Manual Tracing Instrumentation: Notes and Practices](#manual-tracing-instrumentation-notes-and-practices)


## Packages
@@ -65,13 +76,13 @@ Prometheus, on the other hand, doesn't play quite so nicely. Confusingly, OpenTe

* We start by using OpenTelemetry's automatic instrumentation to create all of the Provider-level instrumentation contexts for the Orchestration Service. This is executed in the project's `Dockerfile`.
* We supplement the automatic instrumentation with manual instrumentation for both metrics and traces/spans for the `process-message` endpoint. This is instrumented in `main.py`.
- * We spin up local instances of the OTel Collector, Jaeger, Prometheus, and Grafana, making sure to expose specific relative ports and endpoints. This is executed in the project's `docker-compose.yml`.
+ * We spin up local instances of the OTel Collector, Jaeger, Prometheus, and Grafana, making sure to expose specific relative ports and endpoints. This is executed in the project's `docker-compose.yaml`.
* We configure the OTel Collector with the listening/transmitting mechanisms we want to use, and then activate the Collector with a service pipeline. This is set up in `otel-collector-config.yaml`.
* We configure the Jaeger UI with a few visual options to customize how we want our menus laid out when data reaches the backend. This is set up in `jaeger-ui.json`.
- * We configure Prometheus with a scrape job to fetch the metrics that the OTel Collector put together for us based on our instrumentation. This is configured in `prometheus.yml`.
+ * We configure Prometheus with a scrape job to fetch the metrics that the OTel Collector put together for us based on our instrumentation. This is configured in `prometheus.yaml`.
* We link Prometheus to Grafana as its database source so that it can pull metrics from Prometheus' local DB into visualization dashboards.
- * We configure `grafana/datasources/datasources.yml` to specify Prometheus connection as the default data source to use for the visualization in Grafana.
- * We configure `grafana/dashboards/dashboard.yml` for the Dashboards we use throughout Grafana. New dashboards can be created in Grafana UI then downloaded as JSON and saved within the `grafana/dashboards` folder for use by any user logging into `localhost:4000`.
+ * We configure `grafana/datasources/datasources.yaml` to specify Prometheus connection as the default data source to use for the visualization in Grafana.
+ * We configure `grafana/dashboards/dashboard.yaml` for the Dashboards we use throughout Grafana. New dashboards can be created in Grafana UI then downloaded as JSON and saved within the `grafana/dashboards` folder for use by any user logging into `localhost:4000`.

## The Monitoring Flow This Creates

@@ -95,8 +106,8 @@ When an API request hits the `process-message` endpoint of the Orchestration Ser
* Any and all data that's been sent to `/metrics` without the `/metrics` endpoint being hit continues to remain there until the Prometheus `scrape_interval` hits.
* The local Prometheus server makes an API call to the Collector's `8889/metrics` endpoint, scraping the still-OTLP encoded data into its own local, in-container database.
* Prometheus data volumes are OTLP compatible, so the data doesn't need to be further encoded or modified before it's finally transferred to its end-destination mounted storage volume.
- * Grafana then is configured via the `datasources.yml` file to use Prometheus as the data source.
- * Grafana's UI can be accessed by navigating to `localhost:4000` (the endpoint specified in `docker-compose.yml` and `grafana.ini`) to create and view dashboards.
+ * Grafana then is configured via the `datasources.yaml` file to use Prometheus as the data source.
+ * Grafana's UI can be accessed by navigating to `localhost:4000` (the endpoint specified in `docker-compose.yaml` and `grafana.ini`) to create and view dashboards.
* In addition to dashboards, Grafana offers an Explore mode to query any data exposed by Prometheus to directly investigate metrics or issues.

## Where This Happens In The Code
@@ -113,7 +124,7 @@ We `pip install` and then execute a bootstrap command for an OpenTelemetry distr

The top of the code shows a bit of manual instrumentation to create metrics for each endpoint that might be hit with a message processing request. Note that we don't need to create a Provider, since auto instrumentation does that for us, but we do still need to create a Meter (which is an Agent in the pattern) and then a Metric (which is an instrument).

- ### `docker-compose.yml`
+ ### `docker-compose.yaml`

The project's docker compose file is the heart of the connections needed to successfully emit, collect, and report telemetry data. In addition to spinning up the DIBBs services that Orchestration relies on, we also instantiate four other services needed for telemetry reporting:
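
The four service definitions themselves sit in a collapsed region of this diff. As a rough sketch of the shape they take — image tags and the OTLP intake port are assumptions, while the Prometheus and Grafana mounts, ports, and the Collector image mirror lines shown elsewhere in this diff:

```yaml
# Sketch only: approximates the four telemetry services described above.
services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    volumes:
      - "./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml"
    ports:
      - "4318:4318" # OTLP/HTTP intake from the instrumented service (assumed)
      - "8889:8889" # aggregated metrics endpoint scraped by Prometheus
  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - "16686:16686" # Jaeger UI
  prometheus:
    image: prom/prometheus:latest
    volumes:
      - "./prometheus.yaml:/etc/prometheus/prometheus.yaml"
      - "prom_data:/prometheus"
    ports:
      - "9090:9090"
  grafana:
    image: grafana/grafana:latest
    volumes:
      - "./grafana.ini:/etc/grafana/grafana.ini"
      - "./grafana/datasources/datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml"
    ports:
      - "4000:3000" # UI reachable at localhost:4000, per grafana.ini
    depends_on:
      - prometheus

volumes:
  prom_data:
```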

@@ -134,7 +145,7 @@ This file defines the configuration parameters for the OTel Collector we'll inst
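
The body of that section is collapsed above. A minimal sketch of the receiver → pipeline → exporter shape such a Collector config typically takes — every endpoint and exporter choice here is an assumption, not the repository's actual file:

```yaml
# Sketch: receive OTLP from the instrumented app, ship traces to Jaeger,
# and expose aggregated metrics for Prometheus to scrape.
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318 # assumed OTLP/HTTP intake

exporters:
  otlp/jaeger:
    endpoint: jaeger:4317 # assumed; Jaeger accepts OTLP over gRPC
    tls:
      insecure: true
  prometheus:
    endpoint: 0.0.0.0:8889 # the 8889/metrics endpoint named in the prose

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```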

A simple graphics specification file that tells Jaeger how we want our menu options to look when we open the Jaeger monitoring dashboard at `localhost:16686`. Our configuration options let the UI know which menu options we want visible, how far back to display trace query results, and have a pair of external-facing routes to connect to our other monitoring tools for easy switching.

- ### `prometheus.yml`
+ ### `prometheus.yaml`

A simple specification that tells Prometheus how to collect the metrics data. All prometheus config files must start with the `global` and `scrape_interval` header, and then feature a `scrape_config` section that defines the jobs Prometheus will perform. The Orchestration Monitor has a single job, which is to aggregate telemetry metrics from the OTel Collector, so we provide port 8889 on host `otel-collector` as the target to get those metrics.
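
A minimal sketch of that single-job setup — the job name and 15-second interval are illustrative assumptions, while the `otel-collector:8889` target comes from the paragraph above:

```yaml
# Minimal Prometheus config: one scrape job aimed at the OTel Collector.
global:
  scrape_interval: 15s # assumed value

scrape_configs:
  - job_name: "otel-collector" # assumed name
    static_configs:
      - targets: ["otel-collector:8889"] # host/port named in the prose above
```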

Expand All @@ -149,11 +160,11 @@ The primary configuration file for Grafana specifices various settings to custom
* `security` settings is where the admin_user and admin_password can be specified.
* `users` are user-specific settings. Right now, we allow users to sign-in and create accounts, but we may want to revisit if we want to create a user hierarchy (admin, read-only user, etc.).

- ### `grafana/dashboards/dashboard.yml`
+ ### `grafana/dashboards/dashboard.yaml`

This is a simple yaml file that specifies that dashboards saved here should be pulled in and loaded to the Grafana UI. New dashboards can be created in the UI then downloaded and saved here as JSON.
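
In Grafana's standard provisioning schema, that looks roughly like the following; the provider name is an assumption, and the container path matches the volume mount shown earlier in this diff:

```yaml
# Sketch of a Grafana dashboard provider; JSON dashboards dropped into
# the mounted folder are picked up from this path on startup.
apiVersion: 1

providers:
  - name: "dibbs-dashboards" # assumed provider name
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards
```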

- ### `grafana/datasource/datasource.yml`
+ ### `grafana/datasource/datasource.yaml`

This is a simple yaml file that specifies the default data source for Grafana, which for our purposes is `prometheus:9090`.
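
A hedged sketch of that declaration — only the `prometheus:9090` address is taken from the text; the other fields follow Grafana's generic datasource-provisioning schema:

```yaml
# Sketch: registers Prometheus as Grafana's default data source.
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy # server-side access; an assumption
    url: http://prometheus:9090 # address given in the prose above
    isDefault: true
```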

File renamed without changes.
2 changes: 1 addition & 1 deletion containers/orchestration/tests/integration/conftest.py
@@ -11,7 +11,7 @@ def setup(request):
print("Setting up tests...")
path = Path(__file__).resolve().parent.parent.parent
load_dotenv(dotenv_path=os.path.join(path, ".env"))
- compose_file_name = os.path.join(path, "docker-compose.yml")
+ compose_file_name = os.path.join(path, "docker-compose.yaml")
orchestration_service = DockerCompose(path, compose_file_name=compose_file_name)

orchestration_service.start()
2 changes: 1 addition & 1 deletion containers/record-linkage/Dockerfile
@@ -16,4 +16,4 @@ COPY ./migrations /code/migrations
COPY ./assets /code/assets

EXPOSE 8080
- CMD uvicorn app.main:app --host 0.0.0.0 --port 8080 --log-config app/log_config.yml
+ CMD uvicorn app.main:app --host 0.0.0.0 --port 8080 --log-config app/log_config.yaml