PyTorch

PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook's AI Research lab.
If the text contains digits, they cannot be read out.
https://github.com/open-mmlab/mmdetection/blob/7a9bc498d5cc972171ec4f7332afcd70bb50e60e/tools/analysis_tools/coco_error_analysis.py#L43
I believe this is for the COCO format, but I couldn't find any files for plotting precision or precision-vs-recall charts for the Pascal VOC format.
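If only VOC-style evaluation output is available, a precision-recall curve can be computed directly from per-detection confidence scores and TP/FP flags. The sketch below is illustrative and not part of mmdetection; the function and all names are assumptions:

```python
import numpy as np

def pr_curve(scores, is_tp, num_gt):
    # Sort detections by descending confidence, then accumulate
    # true/false positives for one (recall, precision) point per rank.
    order = np.argsort(-np.asarray(scores, dtype=float))
    hits = np.asarray(is_tp, dtype=float)[order]
    tp = np.cumsum(hits)
    fp = np.cumsum(1.0 - hits)
    recall = tp / num_gt
    precision = tp / (tp + fp)
    return recall, precision

# 4 detections scored against 5 ground-truth boxes
recall, precision = pr_curve([0.9, 0.8, 0.7, 0.6], [1, 1, 0, 1], num_gt=5)
```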
🐛 Bug
DeepSpeed raises an internal error when the Trainer runs on CPU. I imagine they don't support CPU training, so we should raise a MisconfigurationException in that case.
To Reproduce
Code
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule, Trainer

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.data = torch.randn(length, size)
    def __getitem__(self, index):
        return self.data[index]
    def __len__(self):
        return len(self.data)
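A minimal sketch of the proposed check, assuming the strategy can see its root device; this is not the actual Lightning code, and the function name is hypothetical:

```python
from pytorch_lightning.utilities.exceptions import MisconfigurationException

def _check_deepspeed_device(root_device):
    # Hypothetical guard: fail fast with a clear message instead of
    # letting DeepSpeed raise an internal error on CPU.
    if root_device.type != "cuda":
        raise MisconfigurationException(
            f"DeepSpeed only supports GPU training, but the root device is {root_device}."
        )
```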
Change `tensor.data` to `tensor.detach()` due to pytorch/pytorch#6990 (comment); `tensor.detach()` is more robust than `tensor.data`.
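The difference is easy to demonstrate: `.data` detaches without version tracking, so a later in-place edit silently corrupts gradients, while `.detach()` shares the version counter and autograd catches the same edit. A small self-contained illustration:

```python
import torch

# .data bypasses autograd's version counter: in-place edits after it
# can silently corrupt gradients.
a = torch.tensor([1.0, 2.0], requires_grad=True)
b = a.sigmoid()
b.data.zero_()        # wipes b in place; autograd does not notice
b.sum().backward()    # no error, but a.grad is silently wrong (all zeros)

# .detach() shares the version counter, so the same edit is caught.
a2 = torch.tensor([1.0, 2.0], requires_grad=True)
b2 = a2.sigmoid()
b2.detach().zero_()   # the in-place edit is now recorded
try:
    b2.sum().backward()
except RuntimeError as err:
    print("caught:", err)  # autograd flags the in-place modification
```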
🚀 Feature
Motivation
The paper "Learning to Represent Programs with Graphs" encodes computer programs as graphs with rich semantic information; however, most code implementations for its VarMisuse dataset are based on TensorFlow, like tf-gnn-samples (https://github.com/microsoft/tf-gnn-samples).
Although the results in all the TensorFlow plots look nice and ideal, and are consistent across all frameworks, there is a small difference (more of a consistency issue): the TensorFlow training loss/accuracy plots look like they are sampled at fewer points, so they appear straighter, smoother, and less wiggly than the PyTorch or MXNet ones. This can be clearly seen in chapter 6 (CNN LeNet).
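The visual difference comes down to logging frequency; a toy illustration of the same run logged at two rates (the names here are made up, not from the d2l code):

```python
import random

# Per-batch logging gives a wiggly curve; averaging per "epoch" of 100
# batches gives the smooth, less wiggly curve described above.
losses = [1.0 / (1 + 0.01 * i) + random.gauss(0, 0.02) for i in range(1000)]

per_batch = losses                          # 1000 noisy points
per_epoch = [sum(losses[i:i + 100]) / 100   # 10 smooth points
             for i in range(0, 1000, 100)]
```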
Describe the bug
Streaming datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
Steps to reproduce the bug
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
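The failure can be reproduced without the Trainer by pickling the dataset directly, which is what multiprocessing does under the hood; the exact exception type may vary by datasets version:

```python
import pickle

try:
    pickle.dumps(ds)  # streaming dataset from the snippet above
except Exception as err:
    print(type(err).__name__, err)  # crashes instead of round-tripping
```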
New Operator
Describe the operator
Why is this operator necessary? What does it accomplish?
This is a frequently used operator in TensorFlow/Keras.
Can this operator be constructed using existing ONNX operators? If so, why not add it as a function?
I don't know.
Is this operator used by any model currently? Which one?
Are you willing to contribute it?
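On the "construct it from existing operators" question: ONNX lets such a composition be registered as a function rather than a new primitive. A hypothetical example, using Swish (x * sigmoid(x)) purely as a stand-in operator:

```python
from onnx import helper

# Compose Swish from the existing Sigmoid and Mul ops and register it
# as an ONNX function instead of a brand-new operator.
nodes = [
    helper.make_node("Sigmoid", ["X"], ["sig_x"]),
    helper.make_node("Mul", ["X", "sig_x"], ["Y"]),
]
swish = helper.make_function(
    domain="custom",
    fname="Swish",
    inputs=["X"],
    outputs=["Y"],
    nodes=nodes,
    opset_imports=[helper.make_opsetid("", 13)],
)
```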
While trying to speed up my single-shot detector, the following error comes up. Is there any way to fix this?
/usr/local/lib/python3.8/dist-packages/nni/compression/pytorch/speedup/jit_translate.py in forward(self, *args)
363
364 def forward(self, *
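For reference, a hedged sketch of the usual nni 2.x prune-then-speedup flow on a toy model (not the actual detector); if speedup still fails inside jit_translate, the unsupported operation is normally named further down the traceback:

```python
import torch
import torch.nn as nn
from nni.compression.pytorch.pruning import L1NormPruner
from nni.compression.pytorch.speedup import ModelSpeedup

# Toy stand-in for the detector backbone.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(), nn.Conv2d(16, 8, 3))
config_list = [{'sparsity': 0.5, 'op_types': ['Conv2d']}]

pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()
pruner._unwrap_model()  # remove pruning wrappers before speedup

ModelSpeedup(model, torch.randn(1, 3, 32, 32), masks).speedup_model()
```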
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, and this fails when it tries to load data from my compressed files.
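A minimal sketch of the kind of change being requested (not AllenNLP's actual code, and the helper name is hypothetical): choose gzip.open for .gz paths and plain open otherwise, so line-by-line reading works for both:

```python
import gzip
from typing import IO

def open_maybe_compressed(path: str) -> IO[str]:
    # Transparently handle gzipped input files for line-based readers.
    if path.endswith(".gz"):
        return gzip.open(path, "rt", encoding="utf-8")
    return open(path, "r", encoding="utf-8")

# Hypothetical usage inside a predict-style loop:
# for line in open_maybe_compressed("inputs.jsonl.gz"):
#     ...
```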
Created by Facebook's AI Research lab (FAIR)
Released September 2016
- Repository: pytorch/pytorch
- Website: pytorch.org
- Wikipedia
Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers; a sketch of such a test file follows the list below.
Tokenizers concerned
Not yet claimed:
- LED
- RemBert
- MobileBert
- ConvBert
- RetriBert
Claimed:
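A hypothetical skeleton for one of these test files, modeled on the existing tests/test_tokenization_*.py files in the transformers repo; the mixin name and import path are assumptions that should be checked against the repo:

```python
import unittest

from transformers import LEDTokenizer
from .test_tokenization_common import TokenizerTesterMixin  # assumed path


class LEDTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
    tokenizer_class = LEDTokenizer

    def setUp(self):
        super().setUp()
        # build a tiny vocab / fixture tokenizer here, as the existing
        # tokenizer test files do
```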