# pruning

Here are 285 public repositories matching this topic...
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://intellabs.github.io/distiller (a generic pruning sketch follows the topic tags below)
deep-neural-networks
jupyter-notebook
pytorch
regularization
pruning
quantization
group-lasso
distillation
onnx
truncated-svd
network-compression
pruning-structures
early-exit
automl-for-compression
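As a point of reference for what such compression toolkits do, here is a minimal sketch of unstructured magnitude pruning using PyTorch's built-in torch.nn.utils.prune utilities. This is not Distiller's own API, only an illustration of the basic technique.

```python
# Not Distiller's API: a minimal sketch of unstructured magnitude pruning
# using PyTorch's built-in torch.nn.utils.prune utilities.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1),
)

# Zero out the 40% smallest-magnitude weights in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.4)

# The mask is applied via a forward pre-hook; make it permanent when done.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.remove(module, "weight")

# Check the resulting sparsity of one layer.
w = model[0].weight
print(f"sparsity: {float((w == 0).sum()) / w.numel():.2%}")
```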
micronet, a model compression and deployment library (a quantization sketch follows this entry). Compression: 1. quantization: quantization-aware training (QAT), high-bit (>2b) (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b)/ternary and binary (TWN/BNN/XNOR-Net); post-training quantization (PTQ), 8-bit (TensorRT). 2. pruning: normal, regular, and group-convolutional channel pruning. 3. group convolution structure. 4. batch-normalization fusion for quantization. Deployment: TensorRT, FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), dynamic shapes.
pytorch
pruning
convolutional-networks
quantization
xnor-net
tensorrt
model-compression
bnn
neuromorphic-computing
group-convolution
onnx
network-in-network
tensorrt-int8-python
dorefa
twn
network-slimming
integer-arithmetic-only
quantization-aware-training
post-training-quantization
batch-normalization-fuse
Updated Oct 6, 2021 - Python
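The micronet entry above pairs quantization with batch-normalization fusion. The sketch below shows that flow with PyTorch's eager-mode post-training static quantization; it is not micronet's API, and the calibration data is a random stand-in.

```python
# Not micronet's API: a sketch of PyTorch eager-mode post-training static
# quantization (PTQ) with conv+bn+relu fusion, using stand-in calibration data.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 16, 3, padding=1)
        self.bn = nn.BatchNorm2d(16)
        self.relu = nn.ReLU()
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)
        x = self.relu(self.bn(self.conv(x)))
        return self.dequant(x)

model = SmallNet().eval()

# Fuse conv + bn + relu so BN folds into the conv weights before quantization.
torch.quantization.fuse_modules(model, [["conv", "bn", "relu"]], inplace=True)

# Prepare with a default 8-bit qconfig, calibrate on a few batches, convert.
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
torch.quantization.prepare(model, inplace=True)
with torch.no_grad():
    for _ in range(8):  # stand-in calibration batches
        model(torch.randn(4, 3, 32, 32))
torch.quantization.convert(model, inplace=True)
```

For quantization-aware training, the same flow swaps in get_default_qat_qconfig and prepare_qat and trains before converting.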
A curated list of neural network pruning resources.
Updated Oct 31, 2021
A toolkit for optimizing ML models for deployment with Keras and TensorFlow, including quantization and pruning (a pruning sketch follows the topic tags below).
machine-learning
sparsity
compression
deep-learning
tensorflow
optimization
keras
ml
pruning
quantization
model-compression
quantized-training
quantized-neural-networks
quantized-networks
Updated Apr 6, 2022 - Python
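A sketch of Keras magnitude pruning with this toolkit, written from memory of the tfmot.sparsity.keras API; treat it as an outline rather than verified, copy-paste code.

```python
# A sketch of Keras magnitude pruning with the TensorFlow Model Optimization
# Toolkit; API names are from memory, so treat this as an outline.
import tensorflow as tf
import tensorflow_model_optimization as tfmot

model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    tf.keras.layers.Dense(10),
])

# Ramp sparsity from 0% to 80% over the first 1000 training steps.
schedule = tfmot.sparsity.keras.PolynomialDecay(
    initial_sparsity=0.0, final_sparsity=0.8, begin_step=0, end_step=1000)
pruned = tfmot.sparsity.keras.prune_low_magnitude(model, pruning_schedule=schedule)

pruned.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# UpdatePruningStep keeps the pruning masks in sync with the optimizer step:
# pruned.fit(x_train, y_train, epochs=2,
#            callbacks=[tfmot.sparsity.keras.UpdatePruningStep()])

# strip_pruning removes the wrappers, leaving a sparse float model for export.
final_model = tfmot.sparsity.keras.strip_pruning(pruned)
```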
PaddleSlim is an open-source library for deep model compression and architecture search.
pruning
quantization
nas
knowledge-distillation
evolution-strategy
model-compression
neural-architecture-search
hyperparameter-search
autodl
Updated Apr 8, 2022 - Python
Libraries for applying sparsification recipes to neural networks with a few lines of code, enabling faster and smaller models (a generic sketch follows the topic tags below).
nlp
sparsity
tensorflow
keras
pytorch
deep-learning-algorithms
image-classification
deep-learning-library
pruning
object-detection
automl
computer-vision-algorithms
onnx
deep-learning-models
sparsification
pruning-algorithms
smaller-models
model-sparsification
sparsification-recipes
recipe-driven-approaches
Updated Apr 8, 2022 - Python
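SparseML drives sparsification through recipes; the sketch below is not its recipe format, only a plain-PyTorch illustration of the kind of global magnitude pruning such recipes typically apply, where one sparsity target is enforced across several layers at once.

```python
# Not SparseML's recipe API: a plain-PyTorch sketch of global magnitude
# pruning, ranking weights across all layers together rather than per layer.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Collect (module, parameter-name) pairs for all Linear weights.
to_prune = [(m, "weight") for m in model.modules() if isinstance(m, nn.Linear)]

# Remove the 80% smallest-magnitude weights, ranked globally across layers.
prune.global_unstructured(to_prune, pruning_method=prune.L1Unstructured, amount=0.8)

for m, name in to_prune:
    prune.remove(m, name)  # bake the masks into the weights
```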
PyTorch Implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference
Updated Jul 12, 2019 - Python
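A simplified reading of that paper's first-order Taylor criterion, with a stand-in model and data: channels are ranked by the mean absolute value of the activation times the gradient of the loss with respect to that activation.

```python
# A simplified sketch of the first-order Taylor criterion: rank each conv
# channel by mean |activation * d(loss)/d(activation)|. Model, data, and loss
# are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
                      nn.Linear(8 * 32 * 32, 10))
conv = model[0]

acts = {}
def save_activation(module, inputs, output):
    output.retain_grad()          # keep the gradient of this activation
    acts["conv"] = output

conv.register_forward_hook(save_activation)

x = torch.randn(16, 3, 32, 32)
y = torch.randint(0, 10, (16,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

a = acts["conv"]                                  # (N, C, H, W)
scores = (a * a.grad).abs().mean(dim=(0, 2, 3))   # one score per channel
print("channels ranked least to most important:", scores.argsort().tolist())
```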
Embedded and mobile deep learning research resources
deep-neural-networks
deep-learning
inference
pruning
quantization
neural-network-compression
mobile-deep-learning
embedded-ai
efficient-neural-networks
mobile-ai
mobile-inference
Updated Mar 28, 2022
Pruning and distillation for MobileNetV2-YOLOv5s, with support for ncnn and TensorRT deployment. Ultra-light but with better performance! (a distillation-loss sketch follows this entry)
Updated Jul 10, 2021 - Jupyter Notebook
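The entry above mentions distillation; here is a minimal sketch of a Hinton-style distillation loss (temperature-softened KL divergence blended with the ordinary task loss), with small stand-in teacher and student models rather than YOLO networks.

```python
# A minimal sketch of Hinton-style knowledge distillation: soften teacher and
# student logits with a temperature T and penalize their KL divergence,
# blended with the hard-label loss. Teacher and student are stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

teacher = nn.Linear(64, 10)   # stand-in for a large teacher network
student = nn.Linear(64, 10)   # stand-in for the pruned/lightweight student

x = torch.randn(8, 64)
y = torch.randint(0, 10, (8,))
with torch.no_grad():
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, y)
loss.backward()
```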
OpenMMLab Model Compression Toolbox and Benchmark.
detection
pytorch
classification
segmentation
pruning
darts
nas
knowledge-distillation
spos
autoslim
Updated Apr 6, 2022 - Python
Pruning channels for model acceleration
Updated Mar 24, 2022 - Python
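A sketch of structured channel pruning with PyTorch's torch.nn.utils.prune (not this repository's code): the 50% of output channels with the smallest L2 norm in a conv layer are zeroed.

```python
# Not this repository's code: structured channel pruning with
# torch.nn.utils.prune, zeroing the 50% of output channels with the smallest
# L2 norm (dim=0 is the output-channel dimension of a conv weight).
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)

# Count how many output channels were zeroed out.
zeroed = (conv.weight.abs().sum(dim=(1, 2, 3)) == 0).sum().item()
print(f"{zeroed}/{conv.out_channels} output channels pruned")
```

Zeroing channels this way keeps the tensor shapes intact; actually shrinking the layer for speedups requires rebuilding the conv with fewer channels afterward.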
Neural network inference engine that delivers GPU-class performance for sparsified models on CPUs
nlp
computer-vision
tensorflow
ml
inference
pytorch
machinelearning
pruning
object-detection
pretrained-models
quantization
auto-ml
cpus
onnx
yolov3
sparsification
cpu-inference-api
deepsparse-engine
sparsified-models
sparsification-recipe
Updated Apr 7, 2022 - Python
Neural Network Compression Framework for enhanced OpenVINO™ inference
nlp
sparsity
compression
tensorflow
transformers
pytorch
classification
pruning
object-detection
quantization
semantic-segmentation
bert
hawq
mmdetection
mixed-precision-training
quantization-aware-training
Updated Apr 5, 2022 - Python
Config-driven, easy backup CLI for restic.
Updated Apr 7, 2022 - Go
Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration (CVPR 2019 Oral)
Updated Jun 17, 2021 - Python
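A simplified reading of the FPGM criterion (not the authors' reference code): filters whose summed distance to all other filters in the layer is smallest sit closest to the layer's geometric median and are treated as the most redundant, so they become pruning candidates.

```python
# A simplified reading of the FPGM criterion: flatten each filter, compute its
# summed Euclidean distance to every other filter in the layer, and treat the
# filters with the smallest sums (nearest the geometric median) as redundant.
import torch
import torch.nn as nn

conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
filters = conv.weight.detach().flatten(start_dim=1)   # (out_channels, k)

dists = torch.cdist(filters, filters)                 # pairwise L2 distances
scores = dists.sum(dim=1)                             # low score = redundant

prune_ratio = 0.3
n_prune = int(prune_ratio * filters.size(0))
redundant = scores.argsort()[:n_prune]
print("candidate filters to prune:", redundant.tolist())
```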
YOLO model compression and multi-dataset training.
Updated Dec 7, 2021 - Python
Pruning and other network surgery for trained Keras models.
Updated Apr 1, 2022 - Python
avishreekh commented on May 7, 2021:
We also need to benchmark the Lottery-tickets Pruning algorithm and the Quantization algorithms. The models used for this would be the student networks discussed in #105 (ResNet18, MobileNet v2, Quantization v2).
- Pruning (benchmark up to 40%, 50%, and 60% pruned weights)
  - Lottery Tickets
- Quantization
  - Static
  - QAT
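A sketch of the benchmark loop described in that comment, pruning copies of a stand-in student model to 40%, 50%, and 60% sparsity. The evaluation function here only reports the fraction of non-zero weights and would be replaced by a real accuracy measurement on the student networks mentioned above.

```python
# A sketch of a pruning benchmark at 40/50/60% sparsity. The tiny model and
# the density-based "evaluate" are stand-ins for the real student networks
# and accuracy evaluation.
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))

def prune_to(model, amount):
    """Zero out `amount` of the smallest-magnitude weights in each layer."""
    for m in model.modules():
        if isinstance(m, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(m, name="weight", amount=amount)
            prune.remove(m, "weight")   # bake the mask into the weight
    return model

def evaluate(model):
    """Stand-in metric: fraction of parameters still non-zero."""
    total = nonzero = 0
    for p in model.parameters():
        total += p.numel()
        nonzero += (p != 0).sum().item()
    return nonzero / total

results = {amount: evaluate(prune_to(copy.deepcopy(student), amount))
           for amount in (0.4, 0.5, 0.6)}
print(results)
```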
Soft Filter Pruning for Accelerating Deep Convolutional Neural Networks
Updated Oct 2, 2019 - Python
Intel® Neural Compressor (formerly Intel® Low Precision Optimization Tool) aims to provide unified APIs for network compression techniques such as low-precision quantization, sparsity, pruning, and knowledge distillation across different deep learning frameworks, in pursuit of optimal inference performance.
sparsity
deep-learning
pruning
quantization
knowledge-distillation
auto-tuning
low-precision
quantization-aware-training
post-training-quantization
Updated Apr 2, 2022 - Python
Reference ImageNet implementation of SelecSLS CNN architecture proposed in the SIGGRAPH 2020 paper "XNect: Real-time Multi-Person 3D Motion Capture with a Single RGB Camera". The repository also includes code for pruning the model based on implicit sparsity emerging from adaptive gradient descent methods, as detailed in the CVPR 2019 paper "On implicit filter level sparsity in Convolutional Neural Networks".
sparsity
deep-learning
efficient
cnn
pytorch
imagenet
pruning
siggraph
pytorch-implementation
cvpr2019
efficient-architectures
Updated Jul 23, 2020 - Python
Infrastructures™ for Machine Learning Training/Inference in Production.
kubernetes
machine-learning
apache-spark
deep-learning
artificial-intelligence
awesome-list
pruning
quantization
knowledge-distillation
deep-learning-framework
model-compression
apache-arrow
federated-learning
machine-learning-systems
apache-mesos
Updated May 24, 2019
Awesome machine learning model compression research papers, tools, and learning material.
Updated Jan 17, 2022
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework.
Updated Apr 6, 2022 - Python
This repository contains a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin that can be easily adapted to any model/dataset (a sketch of the procedure follows the topic tags below).
python
deep-learning
pytorch
pruning
lottery
network-pruning
pytorch-implementation
iclr2019
lottery-ticket-hypothesis
winning-ticket
Updated Mar 12, 2022 - Python
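A compressed sketch of the lottery-ticket procedure (iterative magnitude pruning with weight rewinding): save the initial weights, train, prune the smallest-magnitude weights, rewind the survivors to their initial values, and repeat. The training loop here is a stand-in, and biases are left as trained for brevity.

```python
# A compressed sketch of iterative magnitude pruning with rewinding, not the
# repository's code. Training is a stand-in.
import copy
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
init_state = copy.deepcopy(model.state_dict())        # theta_0, saved before training

def train_briefly(m):
    opt = torch.optim.SGD(m.parameters(), lr=0.1)
    for _ in range(10):                                # stand-in for real training
        x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
        opt.zero_grad()
        nn.functional.cross_entropy(m(x), y).backward()
        opt.step()

for round_ in range(3):                                # a few prune/rewind rounds
    train_briefly(model)
    for name, mod in model.named_modules():
        if isinstance(mod, nn.Linear):
            # Prune 20% of the remaining weights by trained magnitude...
            prune.l1_unstructured(mod, name="weight", amount=0.2)
            # ...then rewind the surviving weights to their initial values.
            with torch.no_grad():
                mod.weight_orig.copy_(init_state[f"{name}.weight"])
```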
Observations and notes for understanding the workings of neural network models, plus other thought experiments, using TensorFlow.
neural-network
generative-adversarial-network
generative-model
pruning
optimal-brain-damage
uncertainty-neural-networks
Updated Oct 27, 2019 - Jupyter Notebook
PyTorch implementation of our paper accepted to CVPR 2020 (Oral): "HRank: Filter Pruning using High-Rank Feature Map" (a sketch of the criterion follows this entry).
Updated Feb 11, 2021 - Python
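A simplified reading of the HRank criterion (not the authors' code): score each conv filter by the average matrix rank of the feature maps it produces over a few input batches; filters whose outputs have consistently low rank carry redundant information and are pruning candidates.

```python
# A simplified reading of the HRank criterion: average the rank of each
# filter's (H, W) feature map over a few random stand-in batches.
import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3, padding=1)

scores = torch.zeros(conv.out_channels)
batches = 4
with torch.no_grad():
    for _ in range(batches):
        fmap = conv(torch.randn(8, 3, 32, 32))           # (N, C, H, W)
        ranks = torch.linalg.matrix_rank(fmap.float())   # (N, C)
        scores += ranks.float().mean(dim=0)
scores /= batches

print("filters ranked least to most informative:", scores.argsort().tolist())
```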