This project uses a custom machine learning backend for Named Entity Recognition (NER) with Hugging Face's transformers and Label Studio. The model instantiates `AutoModelForTokenClassification` from the transformers library and fine-tunes it on the NER task.
- If you use this model only in inference mode, it serves predictions from the pre-trained model (see the inference sketch below).
- If you want to fine-tune the model, you can use the Label Studio interface to provide training data and train the model.

Read more about compatible models in Hugging Face's official documentation.
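For a sense of what inference mode returns, here is a minimal sketch that runs the default baseline checkpoint (`dslim/bert-base-NER`, see the parameters below) directly through the transformers pipeline API; the example sentence is arbitrary:

```python
from transformers import pipeline

# Load the default baseline checkpoint used by this backend.
ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

# Entities come back with labels such as PER, ORG, and LOC,
# matching the labels in the default NER template shown below.
print(ner("Kasparov played against Deep Blue in New York."))
```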
Before you begin, you must install the Label Studio ML backend. This tutorial uses the `huggingface_ner` example.
This ML backend works with the default NER template from Label Studio. You can find this by selecting Label Studio's pre-built NER template when configuring the labeling interface. It is available under Natural Language Processing > Named Entity Recognition:
<View>
<Labels name="label" toName="text">
<Label value="PER" background="red"/>
<Label value="ORG" background="darkorange"/>
<Label value="LOC" background="orange"/>
<Label value="MISC" background="green"/>
</Labels>
<Text name="text" value="$text"/>
</View>
You can then customize the template to suit your needs (for example, by modifying the label names). However, note the following about model output compatibility:

If you plan to use your model only for inference, make sure the output label names are compatible with what is listed in the XML labeling configuration. If you plan to train the model, you have to provide a baseline pretrained model that can be fine-tuned (i.e. one whose last layer can be trained, for example, `distilbert/distilbert-base-uncased`). Otherwise, you may see an error about a tensor size mismatch during training.
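One quick way to check this compatibility is to inspect the label mapping stored in a checkpoint's configuration before you edit the labeling config; a small sketch using the transformers `AutoConfig` API (the model name is just an example):

```python
from transformers import AutoConfig

# Print the labels the checkpoint was trained to emit.
config = AutoConfig.from_pretrained("dslim/bert-base-NER")
print(config.id2label)
# Expect entries like 'B-PER', 'I-ORG', 'B-LOC', 'B-MISC', plus 'O'.
```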
- Start the Machine Learning backend on http://localhost:9090 with the prebuilt image:
docker-compose up
- Validate that the backend is running:
$ curl http://localhost:9090/
{"status":"UP"}
- Create a project in Label Studio. Then, from the Model page in the project settings, connect the model. The default URL is http://localhost:9090 (see the sketch after this list for connecting via the API instead).
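If you prefer to connect the backend programmatically instead of through the UI, Label Studio also exposes an ML backend registration endpoint in its REST API; the sketch below assumes the `/api/ml` endpoint and uses placeholder values for the host, project ID, and API key:

```python
import requests

LABEL_STUDIO_HOST = "http://localhost:8080"  # placeholder Label Studio URL
API_KEY = "your-api-key"                     # from the Account & Settings page
PROJECT_ID = 1                               # placeholder project ID

# Register the running ML backend with the project.
resp = requests.post(
    f"{LABEL_STUDIO_HOST}/api/ml",
    headers={"Authorization": f"Token {API_KEY}"},
    json={"project": PROJECT_ID, "url": "http://localhost:9090"},
)
resp.raise_for_status()
print(resp.json())
```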
To build the ML backend from source, you have to clone the repository and build the Docker image:
docker-compose build
To run the ML backend without Docker, you have to clone the repository and install all dependencies using pip:
python -m venv ml-backend
source ml-backend/bin/activate
pip install -r requirements.txt
Then you can start the ML backend:
label-studio-ml start ./huggingface_ner
Parameters can be set in `docker-compose.yml` before running the container. The following common parameters are available (a sketch of setting them follows the list):
- `BASIC_AUTH_USER`: Specify the basic auth user for the model server.
- `BASIC_AUTH_PASS`: Specify the basic auth password for the model server.
- `LOG_LEVEL`: Set the log level for the model server.
- `WORKERS`: Specify the number of workers for the model server.
- `THREADS`: Specify the number of threads for the model server.
- `BASELINE_MODEL_NAME`: The name of the baseline model to use. Default is `dslim/bert-base-NER`.
- `FINETUNED_MODEL_NAME`: The name of the fine-tuned model. Default is `finetuned_model`.
- `LABEL_STUDIO_HOST`: The host of the Label Studio instance. Default is `http://localhost:8080`.
- `LABEL_STUDIO_API_KEY`: The API key for the Label Studio instance.
- `START_TRAINING_EACH_N_UPDATES`: The number of updates after which to start training. Default is `10`.
- `LEARNING_RATE`: The learning rate for the model. Default is `1e-3`.
- `NUM_TRAIN_EPOCHS`: The number of training epochs. Default is `10`.
- `WEIGHT_DECAY`: The weight decay for the model. Default is `0.01`.
- `MODEL_DIR`: The directory where the model is stored. Default is `./results`.
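As a concrete illustration, the environment block of the example's `docker-compose.yml` might look like the sketch below; the service name and exact file layout are assumptions, so adapt it to the file shipped with the example:

```yaml
services:
  huggingface_ner:   # service name is an assumption; keep whatever the example uses
    environment:
      - BASELINE_MODEL_NAME=dslim/bert-base-NER
      - FINETUNED_MODEL_NAME=finetuned_model
      - LABEL_STUDIO_HOST=http://localhost:8080
      - LABEL_STUDIO_API_KEY=your-api-key
      - START_TRAINING_EACH_N_UPDATES=10
      - LEARNING_RATE=1e-3
```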
Note: The `LABEL_STUDIO_API_KEY` is required for training the model. You can find it by logging into Label Studio and going to the Account & Settings page.
The ML backend can be customized by adding your own models and logic inside `./huggingface_ner/model.py`. Modify the `predict()` and `fit()` methods to implement your own logic.
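For orientation, here is a minimal, hypothetical skeleton of such a customization; the exact `predict()`/`fit()` signatures and return types in `huggingface_ner/model.py` may differ between label-studio-ml versions (newer versions return a `ModelResponse` object rather than a plain list):

```python
from typing import Dict, List, Optional

from label_studio_ml.model import LabelStudioMLBase


class MyNERModel(LabelStudioMLBase):
    """Sketch of a custom NER backend; adapt to the real model.py."""

    def predict(self, tasks: List[Dict], context: Optional[Dict] = None, **kwargs) -> List[Dict]:
        predictions = []
        for task in tasks:
            text = task["data"]["text"]
            # Run your own model here; the span below is a hard-coded placeholder.
            predictions.append({
                "result": [{
                    "from_name": "label",  # must match the <Labels> name in the config
                    "to_name": "text",     # must match the <Text> name in the config
                    "type": "labels",
                    "value": {"start": 0, "end": min(4, len(text)), "labels": ["PER"]},
                    "score": 0.50,
                }],
                "score": 0.50,
            })
        return predictions

    def fit(self, event, data, **kwargs):
        # Called on annotation events; collect annotations and fine-tune your model here.
        pass
```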