PyTorch

PyTorch is an open-source machine learning library based on the Torch library, primarily developed by Facebook's AI Research lab, and used for applications such as computer vision and natural language processing.
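As a quick illustration of the library's core workflow (tensors plus automatic differentiation), a minimal example:

```python
import torch

# Create a tensor that tracks gradients, run a computation,
# and let autograd fill in dy/dx.
x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()      # populates x.grad with dy/dx = 2x
print(x.grad)     # tensor([2., 4., 6.])
```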
Here are 21,116 public repositories matching this topic...
If the text contains numbers, they cannot be read aloud
Description
MMCV has a WandbLoggerHook (source) that can log metrics with Weights & Biases (W&B) and log saved models, log files, etc. as W&B Artifacts. Given that it is part of MMCV and that other MM-based repositories use it, I propose to have a dedicated Logger for MMD
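The shape of such a metric logger can be sketched without W&B itself; the class and method names below are illustrative stand-ins, not MMCV's actual hook API, and a real implementation would call `wandb.init()` / `wandb.log()` and use `wandb.Artifact` for saved models:

```python
# Illustrative stand-in for a dedicated metric logger; self.history
# stands in for calls to wandb.log(metrics, step=step).
class SimpleMetricLogger:
    def __init__(self):
        self.history = []

    def log(self, metrics, step):
        # Record one logging event, tagged with the training step.
        self.history.append({"step": step, **metrics})

logger = SimpleMetricLogger()
logger.log({"loss": 0.5}, step=1)
```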
🐛 Bug
DeepSpeed raises an internal error when the Trainer runs on CPU. I imagine DeepSpeed doesn't support CPU training, so we should raise a MisconfigurationException in that case.
To Reproduce
Code
import os
import torch
from torch.utils.data import DataLoader, Dataset
from pytorch_lightning import LightningModule, Trainer

class RandomDataset(Dataset):
    def __init__(self, size, length):
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return len(self.data)
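The proposed fix amounts to a fail-fast guard. `MisconfigurationException` is a real Lightning exception class, but the standalone stand-in and function name below are illustrative, not Lightning's actual internals:

```python
# Illustrative sketch of the proposed check: fail fast with a clear
# error when DeepSpeed is combined with CPU-only training.
class MisconfigurationException(Exception):
    """Stand-in for pytorch_lightning's MisconfigurationException."""

def validate_deepspeed_accelerator(accelerator: str) -> None:
    if accelerator == "cpu":
        raise MisconfigurationException(
            "DeepSpeed does not support CPU training; select a GPU accelerator."
        )
```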
Change tensor.data to tensor.detach() due to pytorch/pytorch#6990 (comment); tensor.detach() is more robust than tensor.data.
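The robustness difference can be shown concretely. Both share storage with the source tensor, but only detach() keeps the autograd version counter, so an in-place edit made before backward() is caught instead of silently corrupting gradients. A sketch using exp, whose backward pass saves its output:

```python
import torch

# With detach(): the in-place edit bumps the shared version counter,
# and backward() raises a RuntimeError instead of computing bad grads.
x = torch.tensor([1.0, 2.0], requires_grad=True)
y = x.exp()            # exp saves its output for the backward pass
y.detach().zero_()     # in-place edit through the detached view
try:
    y.sum().backward()
    caught = False
except RuntimeError:
    caught = True       # autograd noticed the modification

# With .data: the same edit goes unnoticed and the gradient is
# silently wrong (zeros instead of exp(x2)).
x2 = torch.tensor([1.0, 2.0], requires_grad=True)
y2 = x2.exp()
y2.data.zero_()        # no version bump: invisible to autograd
y2.sum().backward()    # runs without complaint
```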
🚀 Feature
Motivation
The paper "Learning to Represent Programs with Graphs" encodes computer programs as graphs with rich semantic information; however, most code implementations for its VarMisuse dataset are based on TensorFlow, like [tf-gnn-samples](https://github.com/microsof
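One minimal way to view a program as a graph can be sketched with Python's `ast` module; note this builds only parent-child syntax edges, whereas VarMisuse-style models add many more edge types (data flow, last lexical use, and so on):

```python
import ast

# Build (parent, child) edges over a program's abstract syntax tree,
# labeling each node by its AST node type.
def ast_edges(source):
    edges = []
    for parent in ast.walk(ast.parse(source)):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges

edges = ast_edges("x = 1 + 2")  # includes ('Assign', 'BinOp')
```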
Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
Steps to reproduce the bug
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("
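The failure mode above can be reproduced in miniature: multiprocessing ships objects to worker processes with pickle, and pickle rejects live handles such as generators, so a dataset wrapping one crashes at dispatch time. A toy sketch (the generator stands in for the streaming dataset's open connection):

```python
import pickle

# A generator mimics the live stream a streaming dataset holds open;
# pickle cannot serialize it.
stream = (line for line in ["a", "b"])
try:
    pickle.dumps(stream)
    picklable = True
except TypeError:
    picklable = False
```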
Although the results look nice and ideal in all TensorFlow plots and are consistent across all frameworks, there is a small difference (more of a consistency issue): the TensorFlow training loss/accuracy plots look as if they sample fewer points, appearing straighter, smoother, and less wiggly than PyTorch or MXNet.
It can be clearly seen in chapter 6 ([CNN LeNet](ht
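The sampling effect described above is easy to demonstrate: a curve logged at a lower frequency looks straighter and less wiggly even when the underlying values are identical, simply because it has fewer points. A toy sketch:

```python
# A noisy loss-like curve: 1/(step+1) plus alternating jitter.
noisy = [1.0 / (s + 1) + ((-1) ** s) * 0.05 for s in range(100)]

dense_plot = noisy          # 100 points: visibly wiggly
sparse_plot = noisy[::10]   # every 10th point: appears smooth
```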
New Operator
Describe the operator
Why is this operator necessary? What does it accomplish?
This is a frequently used operator in tensorflow/keras.
Can this operator be constructed using existing ONNX operators?
If so, why not add it as a function?
I don't know.
Is this operator used by any model currently? Which one?
Are you willing to contribute it?
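The "function of existing operators" question in the template is worth illustrating. GELU, used here purely as an example (the excerpt does not name the requested operator), can be expressed entirely with primitives ONNX already has (Erf, Mul, Add, Div):

```python
import math

# GELU composed from existing primitives: 0.5 * x * (1 + erf(x / sqrt(2))).
# In ONNX terms this is a small graph of Div, Erf, Add, and Mul nodes.
def gelu(x):
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))
```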
While trying to speed up my single-shot detector, the following error comes up. Is there any way to fix this?
/usr/local/lib/python3.8/dist-packages/nni/compression/pytorch/speedup/jit_translate.py in forward(self, *args)
363
364 def forward(self, *
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor. This fails when it tries to load data from my compressed files.
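The requested behavior can be sketched as a small helper that picks gzip or plain open based on the file extension, so downstream line-reading code does not care how the data is stored; the function name is illustrative, not AllenNLP's API:

```python
import gzip

# Open .gz files transparently as text; everything else as plain text.
def smart_open(path, mode="rt"):
    if path.endswith(".gz"):
        return gzip.open(path, mode)
    return open(path, mode)
```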
Created by Facebook's AI Research lab (FAIR)
Released September 2016
Latest release about 1 month ago
- Repository: pytorch/pytorch
- Website: pytorch.org
- Wikipedia
Several tokenizers currently have no associated tests. Adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers.
Tokenizers concerned
not yet claimed
LED
RemBert
Splinter
MobileBert
ConvBert
RetriBert
claimed
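The kind of check such a test file typically exercises is a tokenize/detokenize round trip. The toy whitespace tokenizer below is a stand-in for a real transformers tokenizer class, not the library's actual API:

```python
# Toy tokenizer with the two methods a round-trip test needs.
class ToyTokenizer:
    def tokenize(self, text):
        return text.split()

    def convert_tokens_to_string(self, tokens):
        return " ".join(tokens)

tok = ToyTokenizer()
round_trip = tok.convert_tokens_to_string(tok.tokenize("hello world"))
```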