pytorch
Here are 19,540 public repositories matching this topic...
I want to train a detector on the object365 dataset, but object365 is quite large and causes an out-of-memory error on my machine.
I want to split the annotation file into 10 parts (ann1, ann2, ..., ann10), then build 10 datasets and concatenate them, but I'm not sure whether that will work.
Any better suggestions?
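One way to sketch this, assuming the annotations are in a single COCO-style JSON file: split the file by image, write each chunk to disk, then wrap each chunk in a dataset and merge them with torch.utils.data.ConcatDataset. The function and file names below (split_coco_annotations, ann1.json, MyDetDataset) are placeholders for illustration, not part of any existing tool.

```python
# A minimal sketch, assuming a single COCO-style annotation JSON.
# split_coco_annotations and the chunk file names are hypothetical; adapt to your setup.
import json
from torch.utils.data import ConcatDataset

def split_coco_annotations(ann_path, n_parts=10):
    with open(ann_path) as f:
        coco = json.load(f)

    images = coco["images"]
    step = (len(images) + n_parts - 1) // n_parts
    part_paths = []
    for i in range(n_parts):
        part_images = images[i * step:(i + 1) * step]
        part_ids = {img["id"] for img in part_images}
        part = {
            "images": part_images,
            "annotations": [a for a in coco["annotations"] if a["image_id"] in part_ids],
            "categories": coco["categories"],
        }
        out_path = f"ann{i + 1}.json"
        with open(out_path, "w") as f:
            json.dump(part, f)
        part_paths.append(out_path)
    return part_paths

# Build one dataset per annotation chunk and concatenate them; MyDetDataset is a
# placeholder for whatever Dataset class you already use.
# datasets = [MyDetDataset(ann_file=p) for p in split_coco_annotations("objects365_train.json")]
# train_set = ConcatDataset(datasets)
```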
🚀 Feature
When evaluation with trainer.validate(verbose=True) (or trainer.test) finishes, we print a dictionary with the results obtained:
--------------------------------------------------------------------------------
DATALOADER:0 TEST RESULTS
{'test_loss': -3.4134674072265625}
--------------------------------------------------------------------------------
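For reference, a minimal sketch of calling trainer.validate(verbose=True) and capturing the returned metrics programmatically, assuming a recent PyTorch Lightning API; TinyModel is just a throwaway module so the snippet runs end to end.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    """Throwaway module, only here so the snippet is self-contained."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 1)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        self.log("val_loss", torch.nn.functional.mse_loss(self.layer(x), y))

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters())


val_loader = DataLoader(TensorDataset(torch.randn(8, 4), torch.randn(8, 1)), batch_size=4)
trainer = pl.Trainer()

# validate() prints the table shown above when verbose=True and also returns a
# list with one metrics dict per dataloader.
results = trainer.validate(TinyModel(), dataloaders=val_loader, verbose=True)
print(results[0])  # e.g. {'val_loss': 1.23}
```

The returned list carries the same metrics the printed table is built from, so results can also be consumed programmatically.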
Change tensor.data to tensor.detach() due to pytorch/pytorch#6990 (comment). tensor.detach() is more robust than tensor.data.
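A small illustration of the difference: both .data and .detach() return a tensor that shares storage with the original but is cut out of the autograd graph. The difference is that in-place changes through .detach() are caught by autograd's version counter at backward() time, while changes through .data go unnoticed and can silently corrupt gradients.

```python
import torch

a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
b = a.sigmoid()

# .data shares memory with b but is invisible to autograd: an in-place change
# here would silently corrupt the gradient computed for a.
c = b.data
# c.zero_()   # backward() would still run and return wrong gradients

# .detach() also shares memory, but autograd's version counter tracks it, so an
# in-place change is detected and backward() raises instead of being wrong.
d = b.detach()
# d.zero_()   # backward() would raise a "modified by an inplace operation" error

b.sum().backward()
print(a.grad)  # gradient of sum(sigmoid(a)) with respect to a
```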
🚀 Feature
Motivation
The paper "LEARNING TO REPRESENT PROGRAMS WITH GRAPHS" encodes computer programs as graphs with rich semantic information; however, most implementations for its VarMisuse dataset are based on TensorFlow, like [tf-gnn-samples](https://github.com/microsoft/tf-gnn-samples).
New Operator
Describe the operator
Why is this operator necessary? What does it accomplish?
This is a frequently used operator in tensorflow/keras.
Can this operator be constructed using existing ONNX operators? If so, why not add it as a function?
I don't know.
Is this operator used by any model currently? Which one?
Are you willing to contribute it?
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor itself, so it fails when it tries to load data from my compressed files.
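A minimal sketch of the reader-side workaround described above, transparently opening gzipped or plain files, assuming JSON-lines input. MyGzipAwareReader and the "text" field are hypothetical and not part of AllenNLP itself; this does not change the behaviour of the predict command.

```python
# A minimal sketch, assuming JSON-lines data; the reader name and "text" field
# are hypothetical and not part of AllenNLP itself.
import gzip
import json

from allennlp.data import DatasetReader, Instance
from allennlp.data.fields import TextField
from allennlp.data.token_indexers import SingleIdTokenIndexer
from allennlp.data.tokenizers import WhitespaceTokenizer


@DatasetReader.register("my_gzip_aware_reader")
class MyGzipAwareReader(DatasetReader):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._tokenizer = WhitespaceTokenizer()
        self._indexers = {"tokens": SingleIdTokenIndexer()}

    def _read(self, file_path: str):
        # Fall back to plain open() for uncompressed files.
        opener = gzip.open if file_path.endswith(".gz") else open
        with opener(file_path, "rt") as f:
            for line in f:
                yield self.text_to_instance(json.loads(line)["text"])

    def text_to_instance(self, text: str) -> Instance:
        tokens = self._tokenizer.tokenize(text)
        return Instance({"tokens": TextField(tokens, self._indexers)})
```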
Fast Tokenizer for DeBERTa-V3 and mDeBERTa-V3
Motivation
DeBERTa V3 is an improved version of DeBERTa. With the V3 release, the authors also published a multilingual model, "mDeBERTa-base", that outperforms XLM-R-base. However, DeBERTa V3 currently lacks a fast tokenizer implementation, which makes it impossible to use with some of the example scripts (they require a fast tokenizer).
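As a stopgap while no fast tokenizer implementation exists, the slow SentencePiece-based tokenizer can be loaded explicitly; a minimal sketch, assuming the Hugging Face Hub id microsoft/mdeberta-v3-base.

```python
from transformers import AutoTokenizer

# Force the slow (SentencePiece-based) tokenizer explicitly; scripts that
# require a fast tokenizer still cannot be used with this workaround.
tok = AutoTokenizer.from_pretrained("microsoft/mdeberta-v3-base", use_fast=False)
print(tok("DeBERTa V3 tokenization example"))
```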