# quantization
Here are 179 public repositories matching this topic...
Lossy PNG compressor — the pngquant command, based on the libimagequant library (a usage sketch follows this entry)
Topics: c, palette, quality, png, png-compression, conversion, smaller, stdin, image-optimization, quantization, pngquant
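A minimal sketch of invoking pngquant from Python, assuming the `pngquant` binary is on PATH; the quality range, color count, and file names are illustrative, not defaults.

```python
import subprocess

# Re-quantize a PNG to at most 256 colors, keeping quality in the 65-80 range.
# --force overwrites the output file if it already exists.
subprocess.run(
    [
        "pngquant",
        "--quality=65-80",
        "--force",
        "--output", "photo-fs8.png",
        "256",
        "photo.png",
    ],
    check=True,
)
```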
Neural Network Distiller by Intel AI Lab: a Python package for neural network compression research. https://nervanasystems.github.io/distiller
Topics: deep-neural-networks, jupyter-notebook, pytorch, regularization, pruning, quantization, group-lasso, distillation, onnx, truncated-svd, network-compression, pruning-structures, early-exit, automl-for-compression
Updated Jul 23, 2020 - Jupyter Notebook
A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks
Updated Aug 30, 2020 - Python
Base pretrained models and datasets in pytorch (MNIST, SVHN, CIFAR10, CIFAR100, STL10, AlexNet, VGG16, VGG19, ResNet, Inception, SqueezeNet)
Updated May 14, 2020 - Python
A toolkit for optimizing ML models built with Keras and TensorFlow for deployment, including quantization and pruning (a quantization-aware-training sketch follows this entry)
Topics: machine-learning, sparsity, compression, deep-learning, tensorflow, optimization, keras, ml, pruning, quantization, model-compression, quantized-training, quantized-neural-networks, quantized-networks
Updated Sep 2, 2020 - Python
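A minimal quantization-aware-training sketch with the TensorFlow Model Optimization Toolkit (`tensorflow_model_optimization`), which this entry appears to describe; the toy model and layer sizes are illustrative.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Toy Keras model; any Sequential/functional model with supported layers works.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10),
])

# Wrap the model so fake-quantization ops are inserted for training (8-bit by default).
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
# qat_model.fit(...) as usual, then convert with the TFLite converter for deployment.
```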
Model compression based on PyTorch: (1) quantization to 16/8/4/2 bits (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and to ternary/binary values (TWN/BNN/XNOR-Net); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structures; (4) batch-normalization folding for quantization (a DoReFa-style weight quantizer is sketched after this entry)
Topics: pytorch, pruning, convolutional-networks, quantization, xnor-net, model-compression, bnn, neuromorphic-computing, group-convolution, network-in-network, dorefa, twn, network-slimming, integer-arithmetic-only, quan-batch-normalization-folding, quantization-aware-training
Updated Jul 27, 2020 - Python
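A minimal PyTorch sketch of DoReFa-style k-bit weight quantization, one of the schemes this repository lists; the straight-through estimator used during training is omitted for brevity.

```python
import torch

def dorefa_quantize_weights(w: torch.Tensor, k: int) -> torch.Tensor:
    """Quantize weights to k bits following the DoReFa weight transform."""
    if k == 32:
        return w  # full precision: no quantization
    # Normalize weights into [0, 1] via the tanh transform from the DoReFa paper.
    scale = (2 * torch.tanh(w).abs().max()).clamp(min=1e-12)
    w_tilde = torch.tanh(w) / scale + 0.5
    # Uniformly quantize to k bits, then map back to [-1, 1].
    levels = float(2 ** k - 1)
    w_quant = torch.round(w_tilde * levels) / levels
    return 2 * w_quant - 1
```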
Trainable models and NN optimization tools
Topics: sparsity, computer-vision, deep-learning, tensorflow, detection, pytorch, text-recognition, ssd, segmentation, face-recognition, text-detection, quantization, super-resolution, openvino, neural-networks-compression
Updated Sep 1, 2020 - Python
A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.
Topics: tensorflow, pytorch, hyperparameter-optimization, awesome-list, quantization, nas, automl, model-compression, neural-architecture-search, meta-learning, architecture-search, quantized-training, model-acceleration, automated-feature-engineering, quantized-neural-network
Updated Apr 16, 2020
Embedded and mobile deep learning research resources
Topics: deep-neural-networks, deep-learning, inference, pruning, quantization, neural-network-compression, mobile-deep-learning, embedded-ai, efficient-neural-networks, mobile-ai, mobile-inference
Updated Oct 25, 2019
PaddleSlim is an open-source library for deep model compression and architecture search.
Topics: pruning, quantization, nas, knowledge-distillation, evolution-strategy, model-compression, neural-architecture-search, hyperparameter-search, autodl
Updated Sep 3, 2020 - Python
An Open-Source Package for Deep Learning to Hash (DeepHash)
Updated Nov 24, 2019 - Python
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models.
Topics: open-source, machine-learning, opensource, deep-neural-networks, compression, deep-learning, pruning, quantization, auto-ml, network-quantization, network-compression
Updated Sep 3, 2020 - Python
Palette quantization library that powers pngquant and other PNG optimizers
Topics: palette, quality, visual-studio, conversion, callback, minification, image-optimization, quantization, rgba-pixels, palette-generation, pixel-array, image-pixels, pngquant
Updated Aug 15, 2020 - C
Must-read papers on deep learning to hash (DeepHash)
Updated May 8, 2019
Brevitas: quantization-aware training in PyTorch
Updated Sep 2, 2020 - Python
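A minimal sketch of defining a quantized block with Brevitas; the layer shapes and 4-bit widths are illustrative choices, not library defaults.

```python
import torch.nn as nn
from brevitas.nn import QuantConv2d, QuantReLU

# Small convolutional block whose weights and activations are trained at 4 bits.
block = nn.Sequential(
    QuantConv2d(3, 16, kernel_size=3, padding=1, weight_bit_width=4),
    QuantReLU(bit_width=4),
)
```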
A repository that shares tuning results for trained models generated with TensorFlow / Keras: post-training quantization (weight quantization, integer quantization, full-integer quantization, float16 quantization), quantization-aware training, and conversions for OpenVINO, CoreML, and TensorFlow.js (a float16 conversion sketch follows this entry)
Topics: tensorflow, dbface, quantization, super-resolution, bert, coreml, posenet, sound-classification, tensorflow-lite, deeplabv3, tensorflowjs, openvino, edgetpu, mobilenetv3, mediapipe, blazeface, efficientdet, facemesh, objectron, blazepose
Updated Sep 1, 2020 - Python
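A minimal sketch of float16 post-training quantization with the TFLite converter, one of the modes listed above; the model path and output file name are placeholders.

```python
import tensorflow as tf

# Load a trained Keras model (path is a placeholder) and convert it to a
# float16-quantized TFLite flatbuffer.
model = tf.keras.models.load_model("my_model.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]
tflite_model = converter.convert()

with open("my_model_float16.tflite", "wb") as f:
    f.write(tflite_model)
```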
[CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision
Updated Jul 1, 2020 - Python
QKeras: a quantization deep-learning library for TensorFlow Keras (a small example follows this entry)
Topics: machine-learning, fpga, deep-learning, tensorflow, accelerator, keras, quantization, hardware-acceleration, fpga-accelerator, quantized-neural-networks, asic-design, quantized-networks
Updated Aug 25, 2020 - Python
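A minimal QKeras sketch with 4-bit weights and activations; layer sizes and bit widths are illustrative.

```python
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from qkeras import QActivation, QDense, quantized_bits, quantized_relu

# Two quantized dense layers with 4-bit weights/biases and a 4-bit ReLU between them.
inputs = Input((16,))
x = QDense(32,
           kernel_quantizer=quantized_bits(4, 0, 1),
           bias_quantizer=quantized_bits(4, 0, 1))(inputs)
x = QActivation(quantized_relu(4))(x)
outputs = QDense(10,
                 kernel_quantizer=quantized_bits(4, 0, 1),
                 bias_quantizer=quantized_bits(4, 0, 1))(x)
model = Model(inputs, outputs)
```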
LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks
Updated Mar 24, 2019 - Python
Infrastructures™ for Machine Learning Training/Inference in Production.
Topics: kubernetes, machine-learning, apache-spark, deep-learning, artificial-intelligence, awesome-list, pruning, quantization, knowledge-distillation, deep-learning-framework, model-compression, apache-arrow, federated-learning, machine-learning-systems, apache-mesos
Updated May 24, 2019
Ternary Gradients to Reduce Communication in Distributed Deep Learning (TensorFlow)
Updated Nov 19, 2018 - Python
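A minimal PyTorch sketch of the stochastic ternarization idea behind TernGrad (the repository itself is TensorFlow-based): each gradient is mapped to {-s, 0, +s} so that its expectation is preserved.

```python
import torch

def ternarize_gradient(grad: torch.Tensor) -> torch.Tensor:
    """Stochastically ternarize a gradient tensor to {-s, 0, +s}."""
    s = grad.abs().max()
    if s == 0:
        return torch.zeros_like(grad)
    # Each component survives with probability |g_i| / s, so E[output] = grad.
    keep_prob = grad.abs() / s
    mask = torch.bernoulli(keep_prob)
    return s * torch.sign(grad) * mask
```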
A PyTorch implementation of "Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights"
Updated Mar 8, 2020 - Python
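A minimal sketch of the power-of-two weight projection at the core of Incremental Network Quantization; the incremental partitioning and retraining loop that makes the method near-lossless is omitted, and the exponent range is illustrative.

```python
import torch

def project_to_powers_of_two(w: torch.Tensor,
                             min_exp: int = -6,
                             max_exp: int = -1) -> torch.Tensor:
    """Snap each weight to the nearest signed power of two (or zero)."""
    sign = torch.sign(w)
    magnitude = w.abs().clamp(min=1e-12)
    # Round the exponent in the log domain and keep it within the allowed range.
    exponent = torch.round(torch.log2(magnitude)).clamp(min_exp, max_exp)
    quantized = sign * torch.pow(2.0, exponent)
    # Weights well below the smallest representable power of two become zero.
    zero_mask = magnitude < 2.0 ** (min_exp - 1)
    return torch.where(zero_mask, torch.zeros_like(w), quantized)
```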
Awesome machine learning model compression research papers, tools, and learning material.
Updated May 10, 2020
Quantization of convolutional neural networks.
Updated Nov 6, 2019 - Python
Graph Transforms to Quantize and Retrain Deep Neural Nets in TensorFlow.
Updated Dec 9, 2019 - Python
PyTorch Model Compression
Updated Aug 31, 2020 - Python
A curated list of awesome edge machine learning resources, including research papers, inference engines, challenges, books, meetups and others.
Topics: iot, edge, awesome-list, pruning, quantization, auto-ml, edge-machine-learning, federated-learning, embedded-machine-learning, mobile-machine-learning, efficient-architectures, edge-deep-learning
Updated Apr 18, 2020 - Python
AlexKoff88 commented Aug 21, 2020:
The idea is to have a more advanced filter-pruning method able to show SOTA results in model compression/optimization. I suggest reimplementing the method from https://github.com/cmu-enyac/LeGR and reproducing its baseline results for MobileNet v2 on CIFAR-100 as the first step.
Collections of model quantization algorithms
Updated Jul 22, 2020 - Python
Blueoil inference no longer works with Python 2. We need to update the pip packages used for inference, but the enum34 package would block some recent packages. The CI currently runs pip uninstall enum34 in its pipeline to test backward compatibility, but this will no longer be required in the future. Related to blue-oil/blueoil#1139.