Releases: nebuly-ai/optimate
v0.5.0
nebullvm 0.5.0 Release Notes
This release of Nebullvm simplifies the required dependencies and improves code stability.
New Features
- nebullvm no longer requires all deep learning frameworks to be installed in order to run.
- Compilers are no longer installed by default the first time nebullvm is imported.
- Through the Auto-Installer, users can select which libraries and compilers to install (see the sketch below).
- Improved test coverage.
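A minimal sketch of the new behavior; the module path and flag names shown are assumptions, not a documented CLI, and are included only to illustrate the idea of selective installation.

```python
# Sketch only: importing nebullvm no longer downloads compilers, and the
# Auto-Installer can be invoked explicitly for the stacks you actually need.
# The module path and flag names below are assumptions, not a documented CLI.
import nebullvm  # since 0.5.0, no compilers are installed at import time

# Hypothetical explicit installation step (run once from a shell):
#   python -m nebullvm.installers.auto_installer --frameworks torch --compilers all
```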
Bug fixes
- Fixed multiple bugs in the TensorFlow interface.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
v0.4.4
nebullvm 0.4.4 Release Notes
This release of Nebullvm provides new optimizers and various improvements in code stability.
New Features
- Updated the notebooks with the new API.
- Improved test coverage.
- Added Intel Neural Compressor pruning and quantization.
- Latency computation now uses all the provided data instead of only the first sample.
- OpenVINO dynamic shape handling has been updated to the new method available from version 2.0.
- The optimized model is now discarded if its results differ from the original model's (`metric_drop_ths=0`); see the sketch below.
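As a rough sketch of the discarding behavior, assuming the unified `optimize_model` API introduced in v0.4.0; the import path and the input-data format shown here are assumptions, not a verified signature.

```python
import torch
import torchvision.models as models

from nebullvm.api.functions import optimize_model  # assumed import path

model = models.resnet18()

# Assumed data format: a list of ((inputs,), label) samples. Latency is now
# measured over all of these samples, not only the first one.
input_data = [
    ((torch.randn(1, 3, 224, 224),), torch.tensor([0])) for _ in range(100)
]

# With metric_drop_ths=0, any optimized candidate whose outputs differ from
# the original model's is discarded rather than returned.
optimized_model = optimize_model(model, input_data=input_data, metric_drop_ths=0)
```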
Bug fixes
- Fixed an issue in ONNX quantization; it is now much faster than before.
- Fixed a TensorRT bug in static quantization with the ONNX interface.
- Fixes and improvements to the TorchScript compiler: it now also supports trace and torch.fx for tracing the model.
- Fixed a bug on macOS related to ONNX and int8 quantization.
- Fixed a bug in SparseML that prevented it from working on Colab.
- Bug fixes in the DeepSparse compiler.
- Fixes and improvements to the internal ONNX model handling.
- Fixed an issue in the TensorFlow backend.
- Fixes to TensorRT with transformers in both the PyTorch and ONNX interfaces.
- Fixed a bug in TensorRT static quantization when using a newer version of Polygraphy.
- Fixed a bug in the Hugging Face interface when passing the tokenizer to the `optimize_model` function.
- Fixed a bug when using quantization with only a few data samples.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
v0.4.3
nebullvm 0.4.3 Release Notes
Minor release that fixes some bugs introduced in v0.4.2.
Bug fixes
- Fixed a bug preventing installation without TensorFlow.
- Fixed a bug in the Hugging Face interface.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
v0.4.2
nebullvm 0.4.2 Release Notes
Minor release that fixes some bugs and reduces the number of strict requirements needed to run Nebullvm.
New Features
- Support `ignore_compilers` also for `torchscript` and `tflite` (see the sketch below).
- TensorFlow is no longer a strict nebullvm requirement.
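A short sketch of excluding specific backends via `ignore_compilers`. The compiler names come from these notes; the import path and the exact string spellings are assumptions.

```python
import torch

from nebullvm.api.functions import optimize_model  # assumed import path

model = torch.nn.Sequential(torch.nn.Linear(10, 10), torch.nn.ReLU())
input_data = [((torch.randn(1, 10),), None) for _ in range(100)]

# Skip the TorchScript and TFLite backends during the compiler search.
optimized_model = optimize_model(
    model, input_data=input_data, ignore_compilers=["torchscript", "tflite"]
)
```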
Bug fixes
- Fixed a bug with half precision in ONNX Runtime.
- Fixed a bug in TensorRT quantization: NumPy arrays were passed to the inference learner instead of tensors.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
v0.4.1
nebullvm 0.4.1 Release Notes
Minor release fixing some bugs and extending support for TensorRT directly with the PyTorch interface.
New Features
- Support for TensorRT directly with PyTorch models.
Bug fixes
- Fixed a bug in the conversion to ONNX that could lead to wrong inference results.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
v0.4.0
nebullvm 0.4.0 Release Notes
"One API to rule them all". This major release of Nebullvm provides a brand new API unique to all Deep Learning frameworks.
New Features
- New unified API for all the supported deep learning frameworks (sketched below).
- Support for SparseML pruning.
- Beta support for Intel Neural Compressor pruning.
- Add support for the BladeDISC compiler.
- Modify the latency calculation for each model by using the median instead of the mean across different model runs.
- Implement an early stop mechanism for latency computation.
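A minimal sketch of the unified-API idea: one function accepts models from any supported framework. The import path and data format below are assumptions based on these notes, not a verified signature.

```python
import torch
import torchvision.models as models

from nebullvm.api.functions import optimize_model  # assumed import path

# A PyTorch model goes through the same entry point...
pt_model = models.resnet18()
pt_data = [((torch.randn(1, 3, 224, 224),), None) for _ in range(100)]
optimized_pt = optimize_model(pt_model, input_data=pt_data)

# ...and the same call is meant to accept TensorFlow/Keras or Hugging Face
# models (or a path to an ONNX file), returning an optimized model that keeps
# the interface of the original framework.
```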
Bug fixes
- Fixed a bug with Hugging Face models causing a failure during optimization.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
- Reiase (@reiase)
v0.3.2
nebullvm 0.3.2 Release Notes
Minor release for maintenance purposes. It fixes bugs and generally improves the code stability.
New Features
- In the PyTorch framework, when input data is provided for optimization, the model converter now also uses it during the conversion of the model to ONNX, instead of using the data only when applying the precision reduction techniques.
Bug fixes
- Fixed a bug with OpenVINO 2.0 not working with 1-dimensional arrays.
- Fixed a bug in the TensorRT engine, which returned CPU tensors even when the input tensors were on GPU.
- Fixed requirements conflicts on Intel CPUs due to an old NumPy version required by OpenVINO.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
- SolomidHero (@SolomidHero)
- Emile Courthoud (@emilecourthoud)
v0.3.1
nebullvm 0.3.1 Release Notes
We are pleased to announce that we have added the option to run nebullvm from a Docker container. We provide both a Docker image on Docker Hub and a Dockerfile to build the container directly from the latest version of the source code.
New Features
- Add Dockerfile and upload Docker images to Docker Hub.
- Implement new backend for the TensorFlow API running on top of TensorFlow and TFLite.
- Implement new backend for the PyTorch API running on top of TorchScript.
Bug fixes
- Fixed a bug with TensorRT in the TensorFlow API.
- Fixed a bug with OpenVINO 2.0 not using quantization on Intel devices.
Contributors
- Diego Fiori (@morgoth95)
- Valerio Sofi (@valeriosofi)
- Emile Courthoud (@emilecourthoud)
v0.3.0
nebullvm 0.3.0 Release Notes
We are super excited to announce the new major release nebullvm 0.3.0, where nebullvm's AI inference accelerator becomes more powerful, stable and covers more use cases.
nebullvm is an open-source library that generates an optimized version of your deep learning model that runs 2-10 times faster in inference without performance loss by leveraging multiple deep learning compilers (OpenVINO, TensorRT, etc.). With the new release 0.3.0, nebullvm can now accelerate inference up to 30x if you specify that you are willing to trade off a self-defined amount of accuracy/precision to get an even lower response time and a lighter model. This additional acceleration is achieved by exploiting optimization techniques that slightly modify the model graph to make it lighter, such as quantization, half precision, distillation, sparsity, etc.
Find tutorials and examples on how to use nebullvm, as well as installation instructions, in the main readme of the nebullvm library. And check below if you want to learn more about:
- Overview of Nebullvm 0.3.0
- Benchmarks
- How the new Nebullvm 0.3.0 API Works
- New Features & Bug Fixes
Overview of Nebullvm
With this new version, nebullvm continues in its mission to be:
☘️ Easy-to-use. It takes a few lines of code to install the library and optimize your models.
🔥 Framework agnostic. nebullvm supports the most widely used frameworks (PyTorch, TensorFlow, 🆕ONNX🆕 and Hugging Face, etc.) and provides as output an optimized version of your model with the same interface (PyTorch, TensorFlow, etc.).
💻 Deep learning model agnostic. nebullvm supports all the most popular deep learning architectures such as transformers, LSTM, CNN and FCN.
🤖 Hardware agnostic. The library now works on most CPUs and GPUs and will soon support TPUs and other deep learning-specific ASICs.
🔑 Secure. Everything runs locally on your hardware.
✨ Leveraging the best optimization techniques. There are many inference techniques such as deep learning compilers, 🆕quantization or half precision🆕, and soon sparsity and distillation, which are all meant to optimize the way your AI models run on your hardware.
Benchmarks
We have tested nebullvm on popular AI models and hardware from leading vendors.
The table below shows the inference speedup provided by nebullvm. The speedup is calculated as the response time of the unoptimized model divided by the response time of the accelerated model, averaged over 100 experiments. As an example, if the response time of an unoptimized model was on average 600 milliseconds and after nebullvm optimization only 240 milliseconds, the resulting speedup is 2.5x, meaning 150% faster inference (a quick check of these numbers follows the table).
A complete overview of the experiment and findings can be found on this page.
| Model | M1 Pro | Intel Xeon | AMD EPYC | Nvidia T4 |
|---|---|---|---|---|
| EfficientNetB0 | 23.3x | 3.5x | 2.7x | 1.3x |
| EfficientNetB2 | 19.6x | 2.8x | 1.5x | 2.7x |
| EfficientNetB6 | 19.8x | 2.4x | 2.5x | 1.7x |
| Resnet18 | 1.2x | 1.9x | 1.7x | 7.3x |
| Resnet152 | 1.3x | 2.1x | 1.5x | 2.5x |
| SqueezeNet | 1.9x | 2.7x | 2.0x | 1.3x |
| Convnext tiny | 3.2x | 1.3x | 1.8x | 5.0x |
| Convnext large | 3.2x | 1.1x | 1.6x | 4.6x |
| GPT2 - 10 tokens | 2.8x | 3.2x | 2.8x | 3.8x |
| GPT2 - 1024 tokens | - | 1.7x | 1.9x | 1.4x |
| Bert - 8 tokens | 6.4x | 2.9x | 4.8x | 4.1x |
| Bert - 512 tokens | 1.8x | 1.3x | 1.6x | 3.1x |
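As a quick sanity check of the speedup definition above, applied to the 600 ms / 240 ms example:

```python
# Speedup = unoptimized response time / optimized response time.
baseline_ms = 600   # average response time of the unoptimized model
optimized_ms = 240  # average response time after nebullvm optimization

speedup = baseline_ms / optimized_ms      # 2.5
percent_faster = (speedup - 1) * 100      # 150.0
print(f"{speedup:.1f}x speedup, {percent_faster:.0f}% faster inference")
```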
Overall, the library provides great results, with more than 2x acceleration in most cases and around 20x in a few applications. We can also observe that acceleration varies greatly across different hardware-model couplings, so we suggest you test nebullvm on your model and hardware to assess its full potential. You can find the instructions below.
Besides, across all scenarios, nebullvm is very helpful for its ease of use, allowing you to take advantage of inference optimization techniques without having to spend hours studying, testing and debugging these technologies.
How the New Nebullvm API Works
With the latest release, nebullvm has a new API and can be deployed in two ways.
Option A: 2-10x acceleration, NO performance loss
If you choose this option, nebullvm will test multiple deep learning compilers (TensorRT, OpenVINO, ONNX Runtime, etc.) and identify the optimal way to compile your model on your hardware, increasing inference speed by 2-10 times without affecting the performance of your model.
Option B: 2-30x acceleration, supervised performance loss
Nebullvm is capable of speeding up inference by much more than 10 times in case you are willing to sacrifice a fraction of your model's performance. If you specify how much performance loss you are willing to sustain, nebullvm will push your model's response time to its limits by identifying the best possible blend of state-of-the-art inference optimization techniques, such as deep learning compilers, distillation, quantization, half precision, sparsity, etc.
Performance loss is monitored using the `perf_loss_ths` (performance loss threshold) argument, together with the `perf_metric` argument used for performance estimation.
When a predefined metric (e.g. `"accuracy"`) or a custom metric is passed as the `perf_metric` argument, the value of `perf_loss_ths` is used as the maximum acceptable loss for the given metric evaluated on your datasets (Option B.1).
When no `perf_metric` is provided as input, nebullvm calculates the performance loss using the default `precision` function. If a `dataset` is provided, the precision is calculated on 100 sampled data points (Option B.2). Otherwise, the data is randomly generated from the metadata provided as input, i.e. `input_sizes` and `batch_size` (Option B.3). A sketch of these options follows below.
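A hedged sketch of what Option B might look like with the 0.3.0 PyTorch interface. The argument names `perf_loss_ths`, `perf_metric`, `input_sizes` and `batch_size` come from the notes above; the function name, the `save_dir` argument and the overall signature are assumptions, not the verified API.

```python
import torch
import torchvision.models as models

from nebullvm import optimize_torch_model  # assumed import and function name

model = models.resnet18()

# Option B.1: a named metric plus a threshold bounds the acceptable drop.
optimized_model = optimize_torch_model(
    model,
    batch_size=1,
    input_sizes=[(3, 224, 224)],
    save_dir=".",            # assumed argument for storing the optimized model
    perf_loss_ths=0.02,      # tolerate at most a 2% drop...
    perf_metric="accuracy",  # ...of this metric, evaluated on your data
)

# Option B.2: same call with a dataset but without perf_metric; the default
# precision function is computed on 100 sampled data points.
# Option B.3: no dataset either; data is generated from input_sizes/batch_size.
```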
Check out the main GitHub readme if you want to take a look at nebullvm's performance and benchmarks, tutorials and notebooks on how to implement nebullvm with ease. And please leave a ⭐ if you enjoy the project and join the Discord community where we chat about nebullvm and AI optimization.
New Features and Bug Fixes
New features
- Implemented quantization or half precision optimization techniques
- Added support for models in the ONNX framework
- Improved performance of Microsoft ONNX Runtime with transformers
- Integrated nebullvm into Jina's amazing Clip-as-a-Service library for a performance boost (coming soon)
- Accelerated library installation
- Refactored the code to include support for datasets as an API
- Released new benchmarks, notebooks and tutorials, which can be found in the GitHub readme
Bug fixes
- Fixed bug related to Intel OpenVINO applied to dynamic shapes. Thanks @kartikeyporwal for the support!
- Fixed bug with model storage.
- Fixed bug causing issues with NVIDIA TensorRT output. Thanks @UnibsMatt for identifying the problem.
Contributors
- @morgoth95 🥳
- @emilecourthoud 🚀
- @kartikeyporwal 🥇
- @aurimgg 🚗
v0.2.2
nebullvm 0.2.2 Release Notes
nebullvm 0.2.2 is a minor release fixing some bugs.
New Features
- Allow the user to select the maximum number of CPU-threads per model to use during optimization and inference.
Bug fixes
- Fixed a bug in the ONNX Runtime InferenceLearner.
Contributors
- Diego Fiori (@morgoth95)