fix(ui-hydro): add missing matrix path encoding #1939

Status: Closed (wants to merge 36 commits)

Commits
188099c
Merge remote-tracking branch 'origin/release/2.16.4' into dev
skamril Feb 12, 2024
96fb9c9
feature(raw): add endpoint for matrices download
MartinBelthle Feb 8, 2024
34cc269
refactor(api-download): rename `ExpectedFormatTypes` into `TableExpor…
laurent-laporte-pro Feb 8, 2024
e58c917
refactor(api-download): turn `TableExportFormat` enum into case-insen…
laurent-laporte-pro Feb 8, 2024
75b3a20
refactor(api-download): replace "csv" with "tsv" in `TableExportFormat…
laurent-laporte-pro Feb 8, 2024
62d24b0
refactor(api-download): add `suffix` and `media_type` properties to `…
laurent-laporte-pro Feb 8, 2024
793c17c
refactor(api-download): add `export_table` method (to replace `_creat…
laurent-laporte-pro Feb 8, 2024
9ee16bc
refactor(api-download): rename endpoint parameters and use `alias`, `…
laurent-laporte-pro Feb 8, 2024
2e46cbf
refactor(api-download): rename the parameters and variables from `wit…
laurent-laporte-pro Feb 8, 2024
f3925c6
refactor(api-download): correct spelling in unit tests
laurent-laporte-pro Feb 8, 2024
3cf81d7
refactor(api-download): correct implementation of `get_matrix_with_in…
laurent-laporte-pro Feb 8, 2024
05e69f5
refactor(api-download): move `MatrixProfile` and specific matrices in …
laurent-laporte-pro Feb 9, 2024
da198d8
refactor(api-download): simplify implementation of `SPECIFIC_MATRICES`
laurent-laporte-pro Feb 9, 2024
bff6b0c
refactor(api-download): simplify unit tests
laurent-laporte-pro Feb 9, 2024
5a868ce
refactor(api-download): use `user_access_token` instead of the admin …
laurent-laporte-pro Feb 9, 2024
56802d4
refactor(api-download): correct spelling of the month name "July"
laurent-laporte-pro Feb 9, 2024
af99e6b
refactor(api-download): add missing "res" variable in unit tests
laurent-laporte-pro Feb 9, 2024
5049503
refactor(api-download): turn `_SPECIFIC_MATRICES` into a protected va…
laurent-laporte-pro Feb 9, 2024
27af88b
refactor(api-download): simplify and document the `matrix_profile` mo…
laurent-laporte-pro Feb 9, 2024
511df69
refactor(api-download): add missing header to classic time series
laurent-laporte-pro Feb 12, 2024
4509af4
feat(raw): add endpoint for matrix download (#1906)
skamril Feb 13, 2024
4c60fa2
ci: add commitlint GitHub action (#1933)
skamril Feb 13, 2024
bd76b9a
feat(hydro): add the "Min Gen." tab for hydraulic generators
laurent-laporte-pro Feb 12, 2024
2889240
docs(hydro): add a screenshot of the "Min Gen." tab in the documentation
laurent-laporte-pro Feb 13, 2024
e576108
fix(ui-hydro): remove dots from labels and add `studyVersion` missing…
hdinia Feb 13, 2024
a6e9bb6
feat(hydro): add "Min Gen." tab to Hydro section for studies in v8.6 …
laurent-laporte-pro Feb 13, 2024
23a813c
feat(tags-db): populate `tag` and `study_tag` tables using pre-existi…
mabw-rte Feb 14, 2024
2cfa552
fix(tags-db): correct `tag` and `study_tag` migration script
laurent-laporte-pro Feb 14, 2024
8bdc837
fix(tags-db): correct `tag` and `study_tag` migration script (#1934)
laurent-laporte-pro Feb 15, 2024
596c486
perf(watcher): change db queries to improve Watcher scanning perfs
olfamizen Jan 8, 2024
1ce2beb
refactor(studies-db): change signature of `get` method in `StudyMetad…
laurent-laporte-pro Feb 14, 2024
403087e
fix(studies-db): correct the many-to-many relationship between `Study…
laurent-laporte-pro Feb 14, 2024
7c3c5cd
fix(db): add a migration script to correct the many-to-many relations…
laurent-laporte-pro Feb 14, 2024
d0370cd
test: correct issue with test_synthesis.py
laurent-laporte-pro Feb 16, 2024
5c269c6
perf(watcher): improve performance of the Watcher service (#1888)
laurent-laporte-pro Feb 16, 2024
ca92f1a
fix(ui-hydro): add missing matrix path encoding
hdinia Feb 20, 2024
Files changed
13 changes: 13 additions & 0 deletions .github/workflows/commitlint.yml
@@ -0,0 +1,13 @@
name: Lint Commit Messages
on: [pull_request]

permissions:
  contents: read
  pull-requests: read

jobs:
  commitlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: wagoid/commitlint-github-action@v5
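
For readers unfamiliar with the action: wagoid/commitlint-github-action lints every commit message in the pull request against the repository's commitlint configuration (Conventional Commits style, as used throughout this PR's history). As a rough sketch only, the title check looks roughly like the following Python; the accepted type list here is an assumption inferred from this PR's commit titles, not the real configuration:

import re

# Hypothetical approximation of a Conventional Commits title rule:
# "type(optional-scope)!: subject". The type list is an assumption.
TITLE_RE = re.compile(
    r"^(build|chore|ci|docs|feat|feature|fix|perf|refactor|revert|style|test)"
    r"(\([a-z0-9._-]+\))?!?: \S.*$"
)

def is_conventional(title: str) -> bool:
    """Return True when a commit title matches the convention."""
    return TITLE_RE.match(title) is not None

assert is_conventional("fix(ui-hydro): add missing matrix path encoding")
assert is_conventional("perf(watcher): improve performance of the Watcher service (#1888)")
assert not is_conventional("fixed the hydro matrix bug")  # no type prefix
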
104 changes: 104 additions & 0 deletions alembic/versions/dae93f1d9110_….py (new migration script)
@@ -0,0 +1,104 @@
"""
Populate `tag` and `study_tag` tables from `patch` field in `study_additional_data` table

Revision ID: dae93f1d9110
Revises: 3c70366b10ea
Create Date: 2024-02-08 10:30:20.590919
"""
import collections
import itertools
import json
import secrets

import sqlalchemy as sa # type: ignore
from alembic import op
from sqlalchemy.engine import Connection # type: ignore

from antarest.study.css4_colors import COLOR_NAMES

# revision identifiers, used by Alembic.
revision = "dae93f1d9110"
down_revision = "3c70366b10ea"
branch_labels = None
depends_on = None


def upgrade() -> None:
    """
    Populate `tag` and `study_tag` tables from `patch` field in `study_additional_data` table

    Four steps to proceed:
    - Retrieve study-tags pairs from patches in `study_additional_data`.
    - Delete all rows in `tag` and `study_tag`, because tag updates made between revision 3c70366b10ea
      and this version modify the data in the patches as well as in those two tables.
    - Populate `tag` table using unique tag-labels and by randomly generating their associated colors.
    - Populate `study_tag` using study-tags pairs.
    """

    # create a connection to the db
    connexion: Connection = op.get_bind()

    # retrieve the tags and the study-tag pairs from the db
    study_tags = connexion.execute("SELECT study_id, patch FROM study_additional_data")
    tags_by_ids = {}
    for study_id, patch in study_tags:
        obj = json.loads(patch or "{}")
        study = obj.get("study") or {}
        tags = frozenset(study.get("tags") or ())
        tags_by_ids[study_id] = tags

    # delete rows in tables `tag` and `study_tag`
    connexion.execute("DELETE FROM study_tag")
    connexion.execute("DELETE FROM tag")

    # insert the tags in the `tag` table
    labels = set(itertools.chain.from_iterable(tags_by_ids.values()))
    bulk_tags = [{"label": label, "color": secrets.choice(COLOR_NAMES)} for label in labels]
    if bulk_tags:
        sql = sa.text("INSERT INTO tag (label, color) VALUES (:label, :color)")
        connexion.execute(sql, *bulk_tags)

    # create relationships between studies and tags in the `study_tag` table
    bulk_study_tags = [{"study_id": id_, "tag_label": lbl} for id_, tags in tags_by_ids.items() for lbl in tags]
    if bulk_study_tags:
        sql = sa.text("INSERT INTO study_tag (study_id, tag_label) VALUES (:study_id, :tag_label)")
        connexion.execute(sql, *bulk_study_tags)


def downgrade() -> None:
    """
    Restore `patch` field in `study_additional_data` from `tag` and `study_tag` tables

    Three steps to proceed:
    - Retrieve study-tags pairs from `study_tag` table.
    - Update the study-tags in the patches of `study_additional_data` using these pairs.
    - Delete all rows from `tag` and `study_tag`.
    """
    # create a connection to the db
    connexion: Connection = op.get_bind()

    # create the `tags_by_ids` mapping from data in the `study_tag` table
    tags_by_ids = collections.defaultdict(set)
    study_tags = connexion.execute("SELECT study_id, tag_label FROM study_tag")
    for study_id, tag_label in study_tags:
        tags_by_ids[study_id].add(tag_label)

    # then, read objects from the `patch` field of the `study_additional_data` table
    objects_by_ids = {}
    study_tags = connexion.execute("SELECT study_id, patch FROM study_additional_data")
    for study_id, patch in study_tags:
        obj = json.loads(patch or "{}")
        obj["study"] = obj.get("study") or {}
        obj["study"]["tags"] = obj["study"].get("tags") or []
        obj["study"]["tags"] = sorted(tags_by_ids[study_id] | set(obj["study"]["tags"]))
        objects_by_ids[study_id] = obj

    # update objects in the `study_additional_data` table
    bulk_patches = [{"study_id": id_, "patch": json.dumps(obj)} for id_, obj in objects_by_ids.items()]
    if bulk_patches:
        sql = sa.text("UPDATE study_additional_data SET patch = :patch WHERE study_id = :study_id")
        connexion.execute(sql, *bulk_patches)

    # delete study_tags and tags
    connexion.execute("DELETE FROM study_tag")
    connexion.execute("DELETE FROM tag")
86 changes: 86 additions & 0 deletions alembic/versions/fd73601a9075_add_delete_cascade_studies.py
@@ -0,0 +1,86 @@
"""
Add delete cascade constraint to study foreign keys

Revision ID: fd73601a9075
Revises: 3c70366b10ea
Create Date: 2024-02-12 17:27:37.314443
"""
import sqlalchemy as sa # type: ignore
from alembic import op

# revision identifiers, used by Alembic.
revision = "fd73601a9075"
down_revision = "dae93f1d9110"
branch_labels = None
depends_on = None

# noinspection SpellCheckingInspection
RAWSTUDY_FK = "rawstudy_id_fkey"

# noinspection SpellCheckingInspection
VARIANTSTUDY_FK = "variantstudy_id_fkey"

# noinspection SpellCheckingInspection
STUDY_ADDITIONAL_DATA_FK = "study_additional_data_study_id_fkey"


def upgrade() -> None:
    dialect_name: str = op.get_context().dialect.name

    # SQLite doesn't support dropping foreign keys, so we need to ignore it here
    if dialect_name == "postgresql":
        with op.batch_alter_table("rawstudy", schema=None) as batch_op:
            batch_op.drop_constraint(RAWSTUDY_FK, type_="foreignkey")
            batch_op.create_foreign_key(RAWSTUDY_FK, "study", ["id"], ["id"], ondelete="CASCADE")

        with op.batch_alter_table("study_additional_data", schema=None) as batch_op:
            batch_op.drop_constraint(STUDY_ADDITIONAL_DATA_FK, type_="foreignkey")
            batch_op.create_foreign_key(STUDY_ADDITIONAL_DATA_FK, "study", ["study_id"], ["id"], ondelete="CASCADE")

        with op.batch_alter_table("variantstudy", schema=None) as batch_op:
            batch_op.drop_constraint(VARIANTSTUDY_FK, type_="foreignkey")
            batch_op.create_foreign_key(VARIANTSTUDY_FK, "study", ["id"], ["id"], ondelete="CASCADE")

    with op.batch_alter_table("group_metadata", schema=None) as batch_op:
        batch_op.alter_column("group_id", existing_type=sa.VARCHAR(length=36), nullable=False)
        batch_op.alter_column("study_id", existing_type=sa.VARCHAR(length=36), nullable=False)
        batch_op.create_index(batch_op.f("ix_group_metadata_group_id"), ["group_id"], unique=False)
        batch_op.create_index(batch_op.f("ix_group_metadata_study_id"), ["study_id"], unique=False)
        if dialect_name == "postgresql":
            batch_op.drop_constraint("group_metadata_group_id_fkey", type_="foreignkey")
            batch_op.drop_constraint("group_metadata_study_id_fkey", type_="foreignkey")
            batch_op.create_foreign_key(
                "group_metadata_group_id_fkey", "groups", ["group_id"], ["id"], ondelete="CASCADE"
            )
            batch_op.create_foreign_key(
                "group_metadata_study_id_fkey", "study", ["study_id"], ["id"], ondelete="CASCADE"
            )


def downgrade() -> None:
    dialect_name: str = op.get_context().dialect.name
    # SQLite doesn't support dropping foreign keys, so we need to ignore it here
    if dialect_name == "postgresql":
        with op.batch_alter_table("rawstudy", schema=None) as batch_op:
            batch_op.drop_constraint(RAWSTUDY_FK, type_="foreignkey")
            batch_op.create_foreign_key(RAWSTUDY_FK, "study", ["id"], ["id"])

        with op.batch_alter_table("study_additional_data", schema=None) as batch_op:
            batch_op.drop_constraint(STUDY_ADDITIONAL_DATA_FK, type_="foreignkey")
            batch_op.create_foreign_key(STUDY_ADDITIONAL_DATA_FK, "study", ["study_id"], ["id"])

        with op.batch_alter_table("variantstudy", schema=None) as batch_op:
            batch_op.drop_constraint(VARIANTSTUDY_FK, type_="foreignkey")
            batch_op.create_foreign_key(VARIANTSTUDY_FK, "study", ["id"], ["id"])

    with op.batch_alter_table("group_metadata", schema=None) as batch_op:
        # SQLite doesn't support dropping foreign keys, so we need to ignore it here
        if dialect_name == "postgresql":
            batch_op.drop_constraint("group_metadata_study_id_fkey", type_="foreignkey")
            batch_op.drop_constraint("group_metadata_group_id_fkey", type_="foreignkey")
            batch_op.create_foreign_key("group_metadata_study_id_fkey", "study", ["study_id"], ["id"])
            batch_op.create_foreign_key("group_metadata_group_id_fkey", "groups", ["group_id"], ["id"])
        batch_op.drop_index(batch_op.f("ix_group_metadata_study_id"))
        batch_op.drop_index(batch_op.f("ix_group_metadata_group_id"))
        batch_op.alter_column("study_id", existing_type=sa.VARCHAR(length=36), nullable=True)
        batch_op.alter_column("group_id", existing_type=sa.VARCHAR(length=36), nullable=True)
42 changes: 24 additions & 18 deletions antarest/core/filetransfer/service.py
@@ -43,6 +43,8 @@ def request_download(
         filename: str,
         name: Optional[str] = None,
         owner: Optional[JWTUser] = None,
+        use_notification: bool = True,
+        expiration_time_in_minutes: int = 0,
     ) -> FileDownload:
         fh, path = tempfile.mkstemp(dir=self.tmp_dir, suffix=filename)
         os.close(fh)
@@ -55,36 +57,40 @@
             path=str(tmpfile),
             owner=owner.impersonator if owner is not None else None,
             expiration_date=datetime.datetime.utcnow()
-            + datetime.timedelta(minutes=self.download_default_expiration_timeout_minutes),
+            + datetime.timedelta(
+                minutes=expiration_time_in_minutes or self.download_default_expiration_timeout_minutes
+            ),
         )
         self.repository.add(download)
-        self.event_bus.push(
-            Event(
-                type=EventType.DOWNLOAD_CREATED,
-                payload=download.to_dto(),
-                permissions=PermissionInfo(owner=owner.impersonator)
-                if owner
-                else PermissionInfo(public_mode=PublicMode.READ),
+        if use_notification:
+            self.event_bus.push(
+                Event(
+                    type=EventType.DOWNLOAD_CREATED,
+                    payload=download.to_dto(),
+                    permissions=PermissionInfo(owner=owner.impersonator)
+                    if owner
+                    else PermissionInfo(public_mode=PublicMode.READ),
+                )
             )
-        )
         return download
 
-    def set_ready(self, download_id: str) -> None:
+    def set_ready(self, download_id: str, use_notification: bool = True) -> None:
         download = self.repository.get(download_id)
         if not download:
             raise FileDownloadNotFound()
 
         download.ready = True
         self.repository.save(download)
-        self.event_bus.push(
-            Event(
-                type=EventType.DOWNLOAD_READY,
-                payload=download.to_dto(),
-                permissions=PermissionInfo(owner=download.owner)
-                if download.owner
-                else PermissionInfo(public_mode=PublicMode.READ),
+        if use_notification:
+            self.event_bus.push(
+                Event(
+                    type=EventType.DOWNLOAD_READY,
+                    payload=download.to_dto(),
+                    permissions=PermissionInfo(owner=download.owner)
+                    if download.owner
+                    else PermissionInfo(public_mode=PublicMode.READ),
+                )
             )
-        )
 
     def fail(self, download_id: str, reason: str = "") -> None:
         download = self.repository.get(download_id)
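
The `expiration_time_in_minutes or self.download_default_expiration_timeout_minutes` fallback is worth spelling out: `0`, the parameter's default, is falsy, so callers that do not set it keep the service-wide timeout, while any positive value overrides it per download. A minimal sketch of that logic in isolation (the 1440-minute default below is an assumption for illustration, not the service's actual configuration):

import datetime

DEFAULT_EXPIRATION_MINUTES = 1440  # assumed service-wide default (24 h)

def expiration_date(expiration_time_in_minutes: int = 0) -> datetime.datetime:
    # 0 is falsy, so `or` falls back to the service-wide timeout;
    # any positive value overrides it for this download only.
    minutes = expiration_time_in_minutes or DEFAULT_EXPIRATION_MINUTES
    return datetime.datetime.utcnow() + datetime.timedelta(minutes=minutes)

# A 15-minute one-off link expires before a default-lifetime one.
assert expiration_date(15) < expiration_date()
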
49 changes: 34 additions & 15 deletions antarest/study/model.py
@@ -16,7 +16,6 @@
     Integer,
     PrimaryKeyConstraint,
     String,
-    Table,
 )
 from sqlalchemy.orm import relationship  # type: ignore
 
@@ -50,21 +49,40 @@
 
 NEW_DEFAULT_STUDY_VERSION: str = "860"
 
-groups_metadata = Table(
-    "group_metadata",
-    Base.metadata,
-    Column("group_id", String(36), ForeignKey("groups.id")),
-    Column("study_id", String(36), ForeignKey("study.id")),
-)
+
+class StudyGroup(Base):  # type:ignore
+    """
+    A table to manage the many-to-many relationship between `Study` and `Group`
+
+    Attributes:
+        study_id: The ID of the study associated with the group.
+        group_id: The ID of the group associated with the study.
+    """
+
+    __tablename__ = "group_metadata"
+    __table_args__ = (PrimaryKeyConstraint("study_id", "group_id"),)
+
+    group_id: str = Column(String(36), ForeignKey("groups.id", ondelete="CASCADE"), index=True, nullable=False)
+    study_id: str = Column(String(36), ForeignKey("study.id", ondelete="CASCADE"), index=True, nullable=False)
+
+    def __str__(self) -> str:  # pragma: no cover
+        cls_name = self.__class__.__name__
+        return f"[{cls_name}] study_id={self.study_id}, group={self.group_id}"
+
+    def __repr__(self) -> str:  # pragma: no cover
+        cls_name = self.__class__.__name__
+        study_id = self.study_id
+        group_id = self.group_id
+        return f"{cls_name}({study_id=}, {group_id=})"
 
 
 class StudyTag(Base):  # type:ignore
     """
     A table to manage the many-to-many relationship between `Study` and `Tag`
 
     Attributes:
-        study_id (str): The ID of the study associated with the tag.
-        tag_label (str): The label of the tag associated with the study.
+        study_id: The ID of the study associated with the tag.
+        tag_label: The label of the tag associated with the study.
     """
 
     __tablename__ = "study_tag"
@@ -74,7 +92,8 @@ class StudyTag(Base):  # type:ignore
     tag_label: str = Column(String(40), ForeignKey("tag.label", ondelete="CASCADE"), index=True, nullable=False)
 
     def __str__(self) -> str:  # pragma: no cover
-        return f"[StudyTag] study_id={self.study_id}, tag={self.tag}"
+        cls_name = self.__class__.__name__
+        return f"[{cls_name}] study_id={self.study_id}, tag={self.tag}"
 
     def __repr__(self) -> str:  # pragma: no cover
         cls_name = self.__class__.__name__
@@ -90,8 +109,8 @@ class Tag(Base):  # type:ignore
     This class is used to store tags associated with studies.
 
     Attributes:
-        label (str): The label of the tag.
-        color (str): The color code associated with the tag.
+        label: The label of the tag.
+        color: The color code associated with the tag.
     """
 
     __tablename__ = "tag"
@@ -130,7 +149,7 @@ class StudyAdditionalData(Base):  # type:ignore
 
     study_id = Column(
         String(36),
-        ForeignKey("study.id"),
+        ForeignKey("study.id", ondelete="CASCADE"),
         primary_key=True,
     )
     author = Column(String(255), default="Unknown")
@@ -174,7 +193,7 @@ class Study(Base):  # type: ignore
 
     tags: t.List[Tag] = relationship(Tag, secondary=StudyTag.__table__, back_populates="studies")
     owner = relationship(Identity, uselist=False)
-    groups = relationship(Group, secondary=lambda: groups_metadata, cascade="")
+    groups = relationship(Group, secondary=StudyGroup.__table__, cascade="")
     additional_data = relationship(
         StudyAdditionalData,
         uselist=False,
@@ -230,7 +249,7 @@ class RawStudy(Study):
 
     id = Column(
         String(36),
-        ForeignKey("study.id"),
+        ForeignKey("study.id", ondelete="CASCADE"),
         primary_key=True,
     )
     content_status = Column(Enum(StudyContentStatus))
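
The move from the imperative `groups_metadata = Table(...)` to the declarative `StudyGroup` class keeps the same `group_metadata` table while adding a composite primary key (so a study/group pair cannot be linked twice) and a place to hang `__str__`/`__repr__`. A stripped-down sketch of the pattern, with stand-in models rather than the real antarest ones:

import sqlalchemy as sa
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

# Stand-in models (not the real antarest ones) illustrating the pattern:
# an explicit association class with a composite primary key, whose table
# is passed as `secondary` to the many-to-many relationship.
class Group(Base):
    __tablename__ = "groups"
    id = sa.Column(sa.String(36), primary_key=True)

class StudyGroup(Base):
    __tablename__ = "group_metadata"
    __table_args__ = (sa.PrimaryKeyConstraint("study_id", "group_id"),)
    group_id = sa.Column(sa.String(36), sa.ForeignKey("groups.id", ondelete="CASCADE"), index=True, nullable=False)
    study_id = sa.Column(sa.String(36), sa.ForeignKey("study.id", ondelete="CASCADE"), index=True, nullable=False)

class Study(Base):
    __tablename__ = "study"
    id = sa.Column(sa.String(36), primary_key=True)
    groups = relationship(Group, secondary=StudyGroup.__table__)

engine = sa.create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    session.add(Study(id="s1", groups=[Group(id="g1")]))
    session.commit()
    assert [g.id for g in session.get(Study, "s1").groups] == ["g1"]
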