pytorch
Here are 16,078 public repositories matching this topic...
We keep this issue open to collect feature requests from users and to hear your voice. Our monthly release plan is also available here.
You can either:
- Suggest a new feature by leaving a comment.
- Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to every feature request, so vote for the ones that matter most to you!)
- Tell us that
🐛 Bug
This is a fairly important bug report that I've been meaning to file for a while.
In general, it is incorrect to do testing with a distributed sampler. The distributed sampler will either duplicate already-processed samples or drop samples so that the number of batches divides evenly across the number of GPUs.
This is fine when you're doing training
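Below is a minimal sketch, not taken from the issue, of the padding behaviour being described: with 10 samples split across 4 replicas, DistributedSampler repeats the first indices so that every rank receives the same number of samples, which means those samples get evaluated twice.

```python
import torch
from torch.utils.data import TensorDataset
from torch.utils.data.distributed import DistributedSampler

# Hypothetical toy dataset of 10 items, sharded across 4 replicas.
dataset = TensorDataset(torch.arange(10))

indices_per_rank = []
for rank in range(4):
    sampler = DistributedSampler(dataset, num_replicas=4, rank=rank, shuffle=False)
    indices_per_rank.append(list(sampler))

print(indices_per_rank)
# [[0, 4, 8], [1, 5, 9], [2, 6, 0], [3, 7, 1]]
# Indices 0 and 1 appear twice, so metrics aggregated across ranks are skewed.
```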
Change tensor.data to tensor.detach() due to pytorch/pytorch#6990 (comment): tensor.detach() is more robust than tensor.data.
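A minimal sketch of why .detach() is the safer choice: an in-place change made through .data is invisible to autograd and silently produces a wrong gradient, while the same change made through .detach() is caught because the detached view shares the original tensor's version counter.

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

y = x.exp()
y.data.zero_()        # in-place edit hidden from autograd
y.sum().backward()
print(x.grad)         # tensor([0., 0., 0.]) -- silently wrong gradient

x.grad = None
y = x.exp()
y.detach().zero_()    # same edit, but the version counter is bumped
try:
    y.sum().backward()
except RuntimeError as e:
    print("caught:", e)  # autograd detects the in-place modification
```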
Bug Report
Is the issue related to model conversion? No
Describe the bug
The DynamicQuantizeLinear function op does not have a shape inference function defined. In the absence of one, the function body is used to infer shapes for the op; although this works as a fallback, it hurts performance.
Expected behavior
Add a shape inference function for DynamicQuantizeLinear
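For context, here is a small sketch (not from the report) that exercises this path: a one-node model using DynamicQuantizeLinear run through ONNX shape inference, where the output shapes are currently recovered by expanding the op's function body rather than by a dedicated inference rule. The exact output depends on the installed onnx version.

```python
import onnx
from onnx import helper, TensorProto, shape_inference

# Build a minimal model with a single DynamicQuantizeLinear node (opset 11).
node = helper.make_node(
    "DynamicQuantizeLinear",
    inputs=["x"],
    outputs=["y", "y_scale", "y_zero_point"],
)
graph = helper.make_graph(
    nodes=[node],
    name="dql_example",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [3, 4])],
    outputs=[
        helper.make_tensor_value_info("y", TensorProto.UINT8, None),
        helper.make_tensor_value_info("y_scale", TensorProto.FLOAT, None),
        helper.make_tensor_value_info("y_zero_point", TensorProto.UINT8, None),
    ],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 11)])

# Shape inference: today this goes through the function-body fallback.
inferred = shape_inference.infer_shapes(model)
for out in inferred.graph.output:
    print(out.name, out.type.tensor_type.shape)
```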
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor. This fails when it tries to load data from my compressed files.
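A minimal sketch of the kind of transparent decompression the reader side already allows (the helper names here are illustrative, not AllenNLP API): open the file with gzip when the extension calls for it, otherwise as plain text, and yield lines either way.

```python
import gzip
from typing import Iterator, TextIO


def open_maybe_compressed(path: str) -> TextIO:
    """Open a text file, transparently handling gzip-compressed inputs."""
    if path.endswith(".gz"):
        return gzip.open(path, mode="rt", encoding="utf-8")
    return open(path, mode="r", encoding="utf-8")


def read_lines(path: str) -> Iterator[str]:
    # Works the same for data.jsonl and data.jsonl.gz.
    with open_maybe_compressed(path) as handle:
        for line in handle:
            yield line.rstrip("\n")
```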
Let's use this Issue to track performance issues and enhancement requests, so it's easier to prioritize the work.
This is for pytorch transformers.
I will also label it as a Good Difficult Issue in case someone is ready for a challenging but rewarding experience of figuring things out. If you do want to take the challenge, comment in the corresponding Issue/PR that resonates with you