Commit

Misc updates
rudolphpienaar committed Apr 30, 2024
1 parent bf1b961 commit cfceb39
Showing 1 changed file with 6 additions and 6 deletions.
12 changes: 6 additions & 6 deletions README.md
@@ -16,7 +16,7 @@ The basic idea is simple: a client communicates with some remote server using ht

## `pfms` Specificities

```diff
- Broadly speaking, `pfms` provides this exact behavior. However, it is uniquely tailored to providing services within the context of the [pl-monai_spleenseg](https://github.com/FNNDSC/pl-monai_spleenseg) ChRIS plugin. Indeed, `pfms` uses this exact plugin as an internal module to perform the same segmentation. Moreover, unlike more conventional MLOps "model servers", `pfms` accepts as input NIfTI volumes and returns NIfTI volumes as resultants. This is considerable more efficient than a JSON serialization of an image.
+ Broadly speaking, `pfms` provides this exact behavior. However, it is uniquely tailored to providing services within the context of the [pl-monai_spleenseg](https://github.com/FNNDSC/pl-monai_spleenseg) ChRIS plugin. Indeed, `pfms` uses this exact plugin as an internal module to perform the same segmentation. Moreover, unlike more conventional MLOps "model servers", `pfms` accepts as input NIfTI volumes and returns NIfTI volumes as resultants. This is considerably more efficient than a JSON serialization and deserialization of payload data to encode an image.
```
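
The efficiency point can be made concrete: JSON spells each voxel value out as decimal text, while a binary format such as NIfTI stores fixed-width bytes. A minimal, illustrative comparison in Python (the voxel count and values are invented; this is not `pfms` code):

```python
import json
import struct

# 1000 stand-in voxel intensities (made-up values, not real image data)
voxels = [0.123456789] * 1000

raw = struct.pack(f"{len(voxels)}f", *voxels)  # binary: 4 bytes per float32 voxel
text = json.dumps(voxels)                      # JSON: each voxel as decimal text

print(len(raw))              # 4000 bytes
print(len(text) > len(raw))  # True: the JSON payload is several times larger
```

A real volume adds NIfTI headers and compression, but the per-voxel text overhead is what dominates a JSON encoding.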


## `pfms` Deployment
@@ -33,7 +33,7 @@ docker build --build-arg UID=UID -t local/pfms .

### dockerhub

```diff
- To use the version available on dockerhub:
+ To use the version available on dockerhub (note: it might not be available at the time of reading):
```

```bash
docker pull fnndsc/pfms
```

@@ -44,7 +44,7 @@ docker pull fnndsc/pfms
To start the services:

```diff
- SESSIONUSER=localhost
+ SESSIONUSER=localuser
  docker run --gpus all --privileged \
      --env SESSIONUSER=$SESSIONUSER \
      --name pfms --rm -it -d \
```

@@ -55,7 +55,7 @@ docker run --gpus all --privileged \
To start with source code debugging and live refreshing:

```diff
- SESSIONUSER=localhost
+ SESSIONUSER=localuser
  docker run --gpus all --privileged \
      --env SESSIONUSER=$SESSIONUSER \
      --name pfms --rm -it -d \
```

@@ -70,7 +70,7 @@ docker run --gpus all --privileged \
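
As an aside on the `--env` flag used above: the value docker receives is produced by ordinary shell expansion of `$SESSIONUSER`, so the flag must be a single `NAME=value` argument. A docker-free sketch of that expansion:

```shell
SESSIONUSER=localuser
arg="SESSIONUSER=$SESSIONUSER"   # the single argument docker receives via --env
echo "$arg"                      # prints: SESSIONUSER=localuser
```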

### Upload model file in pth format

```diff
- `pfms` can host/provide multiple "models" -- a model is understood here to be simply a pre-trained weights file in `pth` format as generated by `pl-monai_spleenseg` during a training phase. This `pth` file can be uploaded to `pfms` by POSTing the file this endpoint:
+ `pfms` can host/provide multiple "models" -- a model is understood here to be simply a pre-trained weights file in `pth` format as generated by `pl-monai_spleenseg` during a training phase. This `pth` file can be uploaded to `pfms` by POSTing the file to this endpoint:
```

```html
POST :2024/api/v1/spleenseg/modelpth/?modelID=<modelID>
```

@@ -86,7 +86,7 @@ To run the segmentation on a volume using a model file, `POST` the NIfTI volume

```html
POST :2024/api/v1/spleenseg/NIfTIinference/?modelID=<modelID>
```

```diff
- Here, a NIfTI volume is passed as a `FileUpload` request. The `pfms` instance will save/unpack this file within itself, and then run the `pl-monai_spleenseg` inference mode using as model weights the data in the `<modelID>`. The resultant NIfTI file is read and streamed back to the caller, which will typically save this file to disk or do further processing.
+ Here, a NIfTI volume is passed as a `FileUpload` request. The `pfms` instance will save/unpack this file within itself, and then run the `pl-monai_spleenseg` inference mode using as model weights the `pth` file associated with `<modelID>`. The resultant NIfTI file, stored within the server, is then read and streamed back to the caller, which will typically save this file to disk or do further processing.
```
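
The "streamed back to the caller" step might be handled client-side as in the following sketch, where `io.BytesIO` stands in for the real HTTP response object and the payload bytes are fake:

```python
import io
import pathlib
import tempfile

# io.BytesIO stands in for the streamed HTTP response; the payload is fake.
payload = b"fake-nifti-bytes" * 64
response = io.BytesIO(payload)

outfile = pathlib.Path(tempfile.mkdtemp()) / "segmentation.nii.gz"
with open(outfile, "wb") as f:
    for chunk in iter(lambda: response.read(8192), b""):  # write in chunks
        f.write(chunk)

print(outfile.stat().st_size == len(payload))  # True: full stream written to disk
```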

**Note that this call will block until processing is complete!** Processing time (depending on network speed, etc.) is typically under 30 seconds.
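
Putting the two endpoints together, a client session might be assembled as below. This is a hedged sketch only: the `localhost` base URL, the model ID, and the payload placeholders are assumptions (a real client would send the files as multipart uploads), and the requests are constructed but never sent:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Assumed base URL; the docs above give only the path and port 2024.
BASE = "http://localhost:2024/api/v1/spleenseg"
model_id = "spleen-demo"  # hypothetical model ID

upload = Request(
    f"{BASE}/modelpth/?{urlencode({'modelID': model_id})}",
    data=b"...pth bytes...",  # placeholder for the real weights file
    method="POST",
)
infer = Request(
    f"{BASE}/NIfTIinference/?{urlencode({'modelID': model_id})}",
    data=b"...nii.gz bytes...",  # placeholder for the real NIfTI volume
    method="POST",
)

# Built but not sent in this sketch.
print(upload.get_method(), upload.full_url)
```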

