Batch processing optimization (#66)
* Update lab_initialization.py
* Update pre_install.py
* Update zero_shot.py
* Update tag.py
* Update Dockerfile
* Update models.py
Replaces the large zero-shot model used until now with a smaller, more efficient model of similar accuracy.
This lets us move it into the batch-processing data pipeline and out of inter-item processing, significantly accelerating the process and reducing RAM usage over time.
The image is now roughly 1.2 GB smaller, and item processing is much faster, with all models loaded into RAM beforehand.
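A minimal sketch of the batching pattern this change applies: instead of invoking the model once per item, the already-loaded model is fed fixed-size batches. The helper below is hypothetical (not part of this PR); `classify_batch` stands in for the actual model call, whose real signature lives in `models.py`.

```python
from typing import Callable, List


def classify_in_batches(
    items: List[str],
    classify_batch: Callable[[List[str]], List[str]],  # placeholder for the real model call
    batch_size: int = 32,
) -> List[str]:
    """Run an already-loaded model over items in fixed-size batches.

    Keeping the model resident in RAM and feeding it batches avoids the
    per-item overhead removed from inter-item processing by this change.
    """
    results: List[str] = []
    for start in range(0, len(items), batch_size):
        results.extend(classify_batch(items[start:start + batch_size]))
    return results
```

The batch size would be tuned against available RAM; the key point is that the model is loaded once, before the loop, rather than per item.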