PyTorch

PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook's AI Research lab.
Here are 19,758 public repositories matching this topic...
I want to train a detector on the Objects365 dataset, but Objects365 is quite large and causes an out-of-memory error on my machine.
I want to split the annotation file into 10 parts (ann1, ann2, ..., ann10), then build 10 datasets and concatenate them, but I'm not sure whether that will work.
Is there a better approach?
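Splitting the annotation file and concatenating the parts is a reasonable approach. A minimal sketch, assuming a COCO-style annotation dict (the format Objects365 ships in) and tiny hypothetical data — partition the images and keep each image's annotations with its part:

```python
def split_coco_annotations(ann, n_parts):
    """Split a COCO-style annotation dict into n_parts smaller dicts,
    partitioning the images and keeping each image's annotations with it."""
    images = ann["images"]
    chunk = (len(images) + n_parts - 1) // n_parts  # ceil division
    parts = []
    for i in range(n_parts):
        imgs = images[i * chunk:(i + 1) * chunk]
        ids = {im["id"] for im in imgs}
        parts.append({
            "images": imgs,
            "annotations": [a for a in ann["annotations"] if a["image_id"] in ids],
            "categories": ann["categories"],  # category list is shared by all parts
        })
    return parts

# Tiny synthetic example (hypothetical data, not real Objects365 files):
# 10 images, 3 annotations per image, one category.
ann = {
    "images": [{"id": i} for i in range(10)],
    "annotations": [{"id": i, "image_id": i % 10} for i in range(30)],
    "categories": [{"id": 1, "name": "person"}],
}
parts = split_coco_annotations(ann, 5)
```

Each part can then be dumped to its own JSON file and wrapped in a dataset; note the splitting only saves memory if the parts are parsed on demand rather than all at once (e.g. building each sub-dataset lazily before combining them with something like `torch.utils.data.ConcatDataset`).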
Basically a copy of PyTorchLightning/metrics#744, which was addressed in PyTorchLightning/metrics#749.

🚀 Feature

See the suggestion in PyTorchLightning/metrics#740 (comment).

Motivation

Most of the deprecations in TM are aimed at users, not developers.

Pitch

Replace DeprecationWarning.
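One common fix for this (an assumption here, not necessarily the category TM settled on) is to emit user-facing deprecations as `FutureWarning`: Python's default warning filters show `FutureWarning` even when the call originates inside a library, whereas `DeprecationWarning` is hidden unless triggered from `__main__`. A minimal sketch with hypothetical function names:

```python
import warnings

def new_metric(x):
    return x * 2

def old_metric(*args, **kwargs):
    # FutureWarning is shown by default even for calls made inside a
    # library; DeprecationWarning would be hidden by the default filters
    # unless the call came directly from __main__ code.
    warnings.warn(
        "`old_metric` was deprecated and will be removed; use `new_metric`.",
        FutureWarning,
    )
    return new_metric(*args, **kwargs)

# Demonstrate that the warning is emitted with the intended category.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = old_metric(3)
```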
Change tensor.data to tensor.detach(), per pytorch/pytorch#6990 (comment): tensor.detach() is more robust than tensor.data.
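The difference matters because `.data` bypasses autograd's version tracking, while `detach()` shares storage but keeps it. A small sketch of both behaviors (in-place mutation of a tensor that autograd saved for backward):

```python
import torch

# Unsafe: .data bypasses autograd's version counter. The tensor saved
# for backward is silently overwritten, so the gradient comes out wrong.
x = torch.ones(3, requires_grad=True)
y = x * x                  # dy/dx = 2*x, with x saved for backward
x.data.mul_(3)             # mutate in place behind autograd's back
y.sum().backward()
grad_with_data = x.grad    # 2 * (mutated x) = 6, not the correct 2

# Safer: detach() shares storage *and* the version counter, so autograd
# detects the in-place change and raises instead of returning bad grads.
x = torch.ones(3, requires_grad=True)
y = x * x
x.detach().mul_(3)
caught_by_autograd = False
try:
    y.sum().backward()
except RuntimeError:       # "modified by an inplace operation"
    caught_by_autograd = True
```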
🚀 Feature

Motivation

The paper "Learning to Represent Programs with Graphs" encodes computer programs as graphs carrying rich semantic information; however, most implementations on its VarMisuse dataset are based on TensorFlow, like [tf-gnn-samples](https://github.com/microsof
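Nothing in the graph encoding itself is TensorFlow-specific; a PyTorch port mainly needs sparse message passing over program graphs. A toy sketch in plain PyTorch (the node and edge choices below are hypothetical and far simpler than the paper's typed edges):

```python
import torch

# Hypothetical toy graph for the statement `c = a + b`:
# nodes 0..3 = [a, b, +, c]; edges run from operands to the op node and
# from the op node to the assigned variable (a single "child" relation,
# standing in for the paper's many edge types).
edge_index = torch.tensor([[0, 1, 2],           # source nodes
                           [2, 2, 3]])          # target nodes (COO layout)

num_nodes, dim = 4, 8
h = torch.randn(num_nodes, dim)                 # initial node embeddings
W = torch.nn.Linear(dim, dim, bias=False)       # message transformation

# One message-passing step: each target sums transformed source states.
src, dst = edge_index
messages = W(h[src])
agg = torch.zeros(num_nodes, dim).index_add_(0, dst, messages)
h_next = torch.relu(h + agg)                    # residual update
```

Libraries such as PyTorch Geometric package exactly this pattern (COO `edge_index` plus scatter-style aggregation) behind reusable layers.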
New Operator

Describe the operator

Why is this operator necessary? What does it accomplish?
It is a frequently used operator in tensorflow/keras.

Can this operator be constructed using existing ONNX operators? If so, why not add it as a function?
I don't know.

Is this operator used by any model currently? Which one?

Are you willing to contribute it?
Is your feature request related to a problem? Please describe.

I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, and this fails when it tries to load data from my compressed files.
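In the meantime, the predict path (or a custom Predictor) can open files transparently. A minimal sketch of such a helper — the name `smart_open` and the extension-based sniffing are assumptions, not AllenNLP API:

```python
import gzip
import os
import tempfile

def smart_open(path, mode="rt", encoding="utf-8"):
    """Open a file, transparently decompressing it if it ends in .gz.

    A minimal fallback; a real reader might also handle bz2/lzma or
    sniff magic bytes instead of trusting the extension.
    """
    if str(path).endswith(".gz"):
        return gzip.open(path, mode, encoding=encoding)
    return open(path, mode, encoding=encoding)

# Demo: round-trip two JSON lines through a gzipped file.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "data.jsonl.gz")
    with gzip.open(path, "wt", encoding="utf-8") as f:
        f.write('{"text": "a"}\n{"text": "b"}\n')
    with smart_open(path) as f:
        lines = [line.strip() for line in f]
```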
Created by Facebook's AI Research lab (FAIR)
Released September 2016
Latest release about 1 month ago
- Repository: pytorch/pytorch
- Website: pytorch.org
- Wikipedia
Fast Tokenizer for DeBERTa-V3 and mDeBERTa-V3

Motivation

DeBERTa V3 is an improved version of DeBERTa. With the V3 release, the authors also published a multilingual model, "mDeBERTa-base", that outperforms XLM-R-base. However, DeBERTa V3 currently lacks a FastTokenizer implementation, which makes it impossible to use with some of the example scripts (they require a FastTokenizer).