Commit

few more renames
avishniakov committed Nov 14, 2023
1 parent b64bb15 commit 9416987
Showing 4 changed files with 9 additions and 9 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -308,7 +308,7 @@ You can follow [Data Validators docs](https://docs.zenml.io/stacks-and-component
 
 As a last step concluding all work done so far, we will calculate predictions on the inference dataset and persist them in [Artifact Store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores) attached to the current inference model version of the Model Control Plane for reuse and observability.
 
-We will leverage a prepared predictions service called `mlflow_deployment` linked to the inference model version of the Model Control Plane to run `.predict()` and to put predictions as an output of the predictions step, so it is automatically stored in the [Artifact Store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores) and linked to the Model Control Plane model version as a versioned artifact link with zero effort. This is achieved because we additionally annotated the `predictions` output with `ArtifactConfig(overwrite=False)`. This is required to deliver a comprehensive history to stakeholders since Batch Inference can be executed using the same Model Control Plane version multiple times.
+We will leverage a prepared predictions service called `mlflow_deployment` linked to the inference model version of the Model Control Plane to run `.predict()` and to put predictions as an output of the predictions step, so it is automatically stored in the [Artifact Store](https://docs.zenml.io/stacks-and-components/component-guide/artifact-stores) and linked to the Model Control Plane model version as a versioned artifact link with zero effort. This is achieved because we additionally annotated the `predictions` output with `DataArtifactConfig(overwrite=False)`. This is required to deliver a comprehensive history to stakeholders since Batch Inference can be executed using the same Model Control Plane version multiple times.
 
 ```
 NOTE: On non-local orchestrators a `model` artifact will be loaded into memory to run predictions directly. You can adapt this part to your needs.
@@ -318,12 +318,12 @@ NOTE: On non-local orchestrators a `model` artifact will be loaded into memory t
 <summary>Code snippet 💻</summary>
 
 ```python
-from zenml.model import ArtifactConfig
+from zenml.model import DataArtifactConfig
 
 @step
 def inference_predict(
     dataset_inf: pd.DataFrame,
-) -> Annotated[pd.Series, "predictions", ArtifactConfig(overwrite=False)]:
+) -> Annotated[pd.Series, "predictions", DataArtifactConfig(overwrite=False)]:
     model_version = get_step_context().model_version
 
     # get predictor
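The renamed config classes travel as `typing.Annotated` metadata on the step's return type; a framework like ZenML can read them back from the function signature without calling the function. As a minimal, ZenML-free sketch of that mechanism (the `DataArtifactConfig` dataclass below is a hypothetical stand-in, not the real class):

```python
from dataclasses import dataclass
from typing import Annotated, Optional, get_args, get_type_hints


# Hypothetical stand-in for zenml.model.DataArtifactConfig, for illustration only.
@dataclass
class DataArtifactConfig:
    overwrite: bool = True
    artifact_name: Optional[str] = None


def inference_predict(dataset_inf) -> Annotated[
    list, "predictions", DataArtifactConfig(overwrite=False)
]:
    """Toy step: a framework reads the Annotated metadata, not the body."""
    return []


# Recover the output name and config object from the return annotation:
return_hint = get_type_hints(inference_predict, include_extras=True)["return"]
base_type, output_name, config = get_args(return_hint)
print(output_name, config.overwrite)  # predictions False
```

This is why the rename touches only imports and annotations in the files below: the class name appears solely inside `Annotated[...]` metadata, so no step logic changes.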
4 changes: 2 additions & 2 deletions template/steps/deployment/deployment_deploy.py
@@ -11,7 +11,7 @@
     mlflow_model_registry_deployer_step,
 )
 from zenml.logger import get_logger
-from zenml.model import DeploymentArtifactConfig
+from zenml.model import EndpointArtifactConfig
 
 logger = get_logger(__name__)
 
@@ -21,7 +21,7 @@ def deployment_deploy() -> (
     Annotated[
         Optional[MLFlowDeploymentService],
         "mlflow_deployment",
-        DeploymentArtifactConfig(),
+        EndpointArtifactConfig(),
     ]
 ):
     """Predictions step.
4 changes: 2 additions & 2 deletions template/steps/etl/inference_data_preprocessor.py
@@ -6,7 +6,7 @@
 import pandas as pd
 from sklearn.pipeline import Pipeline
 from zenml import step
-from zenml.model import ArtifactConfig
+from zenml.model import DataArtifactConfig
 
 
 @step
@@ -17,7 +17,7 @@ def inference_data_preprocessor(
 ) -> Annotated[
     pd.DataFrame,
     "dataset_inf",
-    ArtifactConfig(overwrite=False, artifact_name="inference_dataset"),
+    DataArtifactConfig(overwrite=False, artifact_name="inference_dataset"),
 ]:
     """Data preprocessor step.
4 changes: 2 additions & 2 deletions template/steps/inference/inference_predict.py
@@ -8,15 +8,15 @@
 from zenml import get_step_context, step
 from zenml.integrations.mlflow.services.mlflow_deployment import MLFlowDeploymentService
 from zenml.logger import get_logger
-from zenml.model import ArtifactConfig
+from zenml.model import DataArtifactConfig
 
 logger = get_logger(__name__)
 
 
 @step
 def inference_predict(
     dataset_inf: pd.DataFrame,
-) -> Annotated[pd.Series, "predictions", ArtifactConfig(overwrite=False)]:
+) -> Annotated[pd.Series, "predictions", DataArtifactConfig(overwrite=False)]:
     """Predictions step.
 
     This is an example of a predictions step that takes the data in and returns
