Add new Compatibility mode OR unify current Remote and Hosting mode #16
Labels: enhancement
The logic of fetching models and model JSON files in transformers.js is as follows: for the scenario of env.allowLocalModels=true and env.allowRemoteModels=true (the default), transformers.js will first try to get the model resources (including the model and the model JSON files) from the local model path, and then try to get them from Hugging Face.
Current implementation:
- Hosting mode:
- Remote mode:

We could unify the current two modes: just configure env.allowLocalModels=true, and then fetch models from the local model path first, falling back to fetching from HF.

Another major reason is the model JSON files (config.json, tokenizer.json). Usually these files are fetched from the same path as the model files: if the configuration fetches the model files from HF, then these JSON files will also be fetched from HF. Because we have added these files to the repo, we want to load the model JSON files locally; a configuration sketch follows below.
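If the two modes are unified, the configuration could look like this minimal sketch (the local path and folder layout are assumptions for illustration):

```js
import { env } from '@xenova/transformers';

// Proposed unified configuration: one mode, local-first with HF fallback.
env.allowLocalModels = true;
env.allowRemoteModels = true;
env.localModelPath = '/models/';

// With the model JSON files checked into the repo, e.g.
//   /models/<model-id>/config.json
//   /models/<model-id>/tokenizer.json
// the JSON files resolve locally, and only artifacts that are missing
// under /models/ (typically the large .onnx weights) are fetched from
// the Hugging Face Hub.
```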
Pros
- In remote mode, when HF is not accessible, the model JSON files could not be obtained, so even if the user had loaded the onnx model, the examples still did not work; the unified mode avoids this.
- It removes the USE_REMOTE_MODELS parameter in config.js (this file is also tracked by Git, so it may change for every production build); a hypothetical before/after sketch follows this list.
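For the second point, a hypothetical before/after of config.js (its real contents are not quoted in this issue):

```js
// Before (hypothetical): a build-time switch tracked by Git,
// flipped per deployment and therefore dirtying the repo.
// export const USE_REMOTE_MODELS = false;

// After: no switch needed; the runtime env flags handle both cases.
import { env } from '@xenova/transformers';
env.allowLocalModels = true;
env.allowRemoteModels = true;
```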
Cons
- Extra failed requests for the local model path will show up in Devtools -> Networks when the files are not hosted locally.
Development work
For the examples integrated from transformers.js, there are no additional code modifications. For the Stable Diffusion Turbo WebGPU example, we should refactor the model fetching logic; a sketch follows below.
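One possible shape for that refactor, as a sketch only (the helper name, local path, and Hub URL pattern are assumptions, not the example's actual code):

```js
// Hypothetical helper for the Stable Diffusion Turbo (WebGPU) example:
// try the locally hosted copy first, then fall back to the Hugging Face Hub.
async function fetchModelBuffer(fileName) {
  const localUrl = `/models/sd-turbo/${fileName}`;                          // assumed local layout
  const remoteUrl = `https://huggingface.co/<org>/<model>/resolve/main/${fileName}`; // placeholder repo

  // Local first: a 404 here is the expected signal to fall back.
  let response = await fetch(localUrl);
  if (!response.ok) {
    response = await fetch(remoteUrl);
  }
  if (!response.ok) {
    throw new Error(`Failed to fetch ${fileName} locally and from HF`);
  }
  return await response.arrayBuffer();
}

// Usage (e.g., when creating an ONNX Runtime Web session):
// const unetBytes = await fetchModelBuffer('unet.onnx');
```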