Provide extra warnings and info for model switching
TimKoornstra committed Nov 10, 2023
1 parent ccf4595 commit c1f8ddb
Showing 3 changed files with 11 additions and 4 deletions.
README.md (9 additions, 2 deletions)

````diff
@@ -322,10 +322,17 @@ You can set these variables in your shell or use a script. An example script to
 Once the API is up and running, you can send HTR requests using curl. Here's how:
 
 ```bash
-curl -X POST -F "image=@$input_path" -F "group_id=$group_id" -F "identifier=$filename" -F "model=$model_path" http://localhost:5000/predict
+curl -X POST -F "image=@$input_path" -F "group_id=$group_id" -F "identifier=$filename" http://localhost:5000/predict
 ```
 
-Replace `$input_path`, `$group_id`, `$filename`, and `$model_path` with your specific values. The model processes the image, predicts the handwritten text, and saves the predictions in the specified output path (from the `LOGHI_OUTPUT_PATH` environment variable). The `model` field is optional, and allows you to dynamically switch the model used.
+Replace `$input_path`, `$group_id`, and `$filename` with your respective file paths and identifiers. If you're considering switching the recognition model, use the `model` field cautiously:
+
+- The `model` field (`-F "model=$model_path"`) specifies which handwritten text recognition model the API should use for the current request.
+- To avoid the slowdown of loading a different model on each request, set a specific model before starting the API via the `LOGHI_MODEL_PATH` environment variable.
+- Only use the `model` field if you are certain that a different model is needed for a particular task and you understand its performance characteristics.
+
+> [!WARNING]
+> Continuous model switching with `$model_path` can cause severe processing delays. Most users should set `LOGHI_MODEL_PATH` once and use the same model consistently, restarting the API with a new value only when necessary.
 ---
````
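The `/predict` endpoint described in the README is a plain multipart-form POST, so the same request can be assembled outside curl. A minimal sketch of the form-building step; `build_predict_form` is a hypothetical helper (not part of the repository), and it only includes the optional `model` field when a path is actually given:

```python
def build_predict_form(group_id, identifier, model_path=None):
    """Assemble the non-file form fields for a POST to /predict.

    Mirrors the curl flags: -F "group_id=...", -F "identifier=...",
    and the optional -F "model=...". The image itself is sent as a
    file upload (curl's -F "image=@...").
    """
    form = {"group_id": group_id, "identifier": identifier}
    if model_path is not None:
        # Per-request model switching; omit this to fall back to the
        # model configured via LOGHI_MODEL_PATH (see warning above).
        form["model"] = model_path
    return form

# With the third-party `requests` library this could be posted as:
#   requests.post("http://localhost:5000/predict",
#                 data=build_predict_form("g1", "page1"),
#                 files={"image": open(input_path, "rb")})
```

Leaving `model` out of the form entirely, rather than sending an empty value, is what lets the API keep the already-loaded model in memory.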
src/api/batch_predictor.py (1 addition, 1 deletion)

```diff
@@ -91,7 +91,7 @@ def batch_prediction_worker(prepared_queue: multiprocessing.JoinableQueue,
         if model_path != old_model_path:
             old_model_path = model_path
             try:
-                logger.info("Model changed, adjusting batch prediction")
+                logger.warning("Model changed, adjusting batch prediction")
                 with strategy.scope():
                     model, utils = create_model(model_path)
                     logger.info("Model created and utilities initialized")
```
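The batch predictor change above hinges on a caching pattern: the worker reuses the in-memory model as long as the requested path is unchanged, and only pays the reload cost when it differs. A minimal sketch of that pattern, with `load_model` standing in for the repository's `create_model` (a hypothetical simplification):

```python
def maybe_reload(model_path, old_model_path, load_model):
    """Reload only when the requested model path differs.

    Returns (model_or_None, new_old_model_path). A None model means
    "keep using the one already in memory".
    """
    if model_path == old_model_path:
        return None, old_model_path  # no change: skip the expensive load
    # Path changed: load the new model and remember the new path.
    return load_model(model_path), model_path
```

Each reload is expensive (graph construction, weight loading), which is why this commit promotes the log line from `info` to `warning`: a mid-stream model switch is costly enough to be worth flagging.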
src/api/image_preparator.py (1 addition, 1 deletion)

```diff
@@ -65,7 +65,7 @@ def image_preparation_worker(batch_size: int,
 
         # Check if the model has changed
         if model_path and model_path != old_model:
-            logger.info(
+            logger.warning(
                 "Model changed, adjusting image preparation")
             if batch_images:
                 # Add the existing batch to the prepared_queue
```
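The image preparator must not mix images destined for different models in a single batch, so when it detects a model change it flushes any partially filled batch downstream before switching. A sketch of that flush step, using a plain list as a stand-in for the multiprocessing queue (function and tuple layout are hypothetical, not the repository's exact code):

```python
def flush_on_model_change(model_path, old_model, batch_images, prepared_queue):
    """If the requested model changed, push any partial batch downstream.

    Returns the new `old_model` value; `batch_images` is emptied in
    place so the next batch starts fresh for the new model.
    """
    if model_path and model_path != old_model:
        if batch_images:
            # Hand the existing batch (still tagged with the old model)
            # to the prediction worker before switching models.
            prepared_queue.append((old_model, list(batch_images)))
            batch_images.clear()
        old_model = model_path
    return old_model
```

Flushing early keeps batches homogeneous per model, at the cost of occasionally emitting an undersized batch at each switch point.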
