- Prune Once for All: Sparse Pre-Trained Language Models (Nov 2021)
- Faster, Easier Optimization with Intel® Neural Compressor (Nov 2021)
- Intel® Neural Compressor: A Scalable Quantization Tool for ONNX Models (Oct 2021)
- A "Double Play" for MLPerf™ Inference Performance Gains with 3rd Generation Intel® Xeon® Scalable Processors (Sep 2021)
- Optimize TensorFlow Pre-trained Model for Inference (Jun 2021)
- 3D Digital Face Reconstruction Solution enabled by 3rd Gen Intel® Xeon® Scalable Processors (Apr 2021)
- Accelerating Alibaba Transformer model performance with 3rd Gen Intel® Xeon® Scalable Processors (Ice Lake) and Intel® Deep Learning Boost (Apr 2021)
- MLPerf™ Performance Gains Abound with latest 3rd Generation Intel® Xeon® Scalable Processors (Apr 2021)
- Using Low-Precision Optimizations for High-Performance DL Inference Applications (Apr 2021)
- Quantization support for ONNX using LPOT (Low Precision Optimization Tool) (Mar 2021)
- DL Boost Quantization with CERN's 3D-GANs model (Feb 2021)
- Reduced Precision Strategies for Deep Learning: 3DGAN Use Case - presentation at the 4th IML Machine Learning Workshop (Oct 2020)
- Intel Neural Compressor (Sep 2020)