natural-language-processing
Natural language processing (NLP) is a field of computer science that studies the interactions between computers and human language. In the 1950s, Alan Turing published an article that proposed a measure of intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced strong results in language modeling, parsing, and other natural-language tasks.
Here are 9,821 public repositories matching this topic...
Although the results look clean and consistent across all frameworks, there is a small consistency issue in the TensorFlow plots: the training loss/accuracy curves look as if they are sampled at fewer points, so they appear straighter, smoother, and less wiggly than the PyTorch or MXNet plots.
It can be clearly seen in chapter 6 ([CNN LeNet](ht
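The smoothing effect described above can be reproduced without any framework: averaging per-batch losses into one point per epoch yields a curve with far fewer, far less noisy points. A minimal sketch (the synthetic loss curve and the epoch/batch counts are made up for illustration):

```python
import random
import statistics

random.seed(0)
num_epochs, batches_per_epoch = 10, 50

# Synthetic per-batch training loss: a decaying trend plus uniform noise.
batch_losses = []
for epoch in range(num_epochs):
    for batch in range(batches_per_epoch):
        step = epoch * batches_per_epoch + batch
        trend = 1.0 - 0.9 * step / (num_epochs * batches_per_epoch - 1)
        batch_losses.append(trend + random.uniform(-0.1, 0.1))

# Sampling one averaged point per epoch (as the TensorFlow plots appear to do)
# produces a much smoother curve than plotting every single batch.
epoch_means = [
    statistics.mean(batch_losses[e * batches_per_epoch:(e + 1) * batches_per_epoch])
    for e in range(num_epochs)
]

print(len(batch_losses), len(epoch_means))  # 500 10
```

With ten averaged points the noise cancels out and the curve tracks the trend almost exactly, while the 500-point curve keeps every wiggle.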
Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
Steps to reproduce the bug
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("
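The crash comes down to picklability: multiprocessing pickles the objects it hands to workers, and a streaming dataset, like any generator-backed object, cannot be pickled. A quick stdlib-only way to check an object before passing it to a worker pool (`is_picklable` is a hypothetical helper for illustration, not a datasets API):

```python
import pickle

def is_picklable(obj) -> bool:
    """Return True if obj survives pickle serialization."""
    try:
        pickle.dumps(obj)
        return True
    except (TypeError, AttributeError, pickle.PicklingError):
        return False

print(is_picklable([1, 2, 3]))                # True: plain containers pickle fine
print(is_picklable(x * x for x in range(3)))  # False: generators cannot be pickled
```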
In gensim/models/fasttext.py:
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervi
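The translation the snippet performs can be sketched as a small mapping from Facebook fastText model attributes to gensim `FastText` keyword arguments, rejecting the unsupported modes the FIXME mentions. `fb_params_to_gensim` and the numeric mode codes are assumptions drawn from the snippet, not gensim's actual loader:

```python
from types import SimpleNamespace

# Mode codes gensim does not support, per the FIXME above (assumed values).
UNSUPPORTED_LOSSES = {3, 4}  # 3 = softmax, 4 = onevsall
UNSUPPORTED_MODELS = {3}     # 3 = supervised

def fb_params_to_gensim(m):
    """Translate a loaded FB fastText parameter object into gensim kwargs."""
    if m.loss in UNSUPPORTED_LOSSES or m.model in UNSUPPORTED_MODELS:
        raise ValueError(f"unsupported FB fastText mode: loss={m.loss} model={m.model}")
    return {
        "vector_size": m.dim,
        "window": m.ws,
        "epochs": m.epoch,
        "negative": m.neg,
    }

m = SimpleNamespace(dim=100, ws=5, epoch=5, neg=5, loss=1, model=2)
print(fb_params_to_gensim(m))
```

Rejecting the unsupported modes up front, rather than silently reading them in, would turn the FIXME into an explicit error.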
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor. This fails when it tries to load data from my compressed files.
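One workaround is a small open helper that dispatches on the file extension, so the same line-reading code handles both plain and gzipped files. `open_maybe_compressed` is a hypothetical helper sketched here, not part of AllenNLP:

```python
import gzip
import os
import tempfile

def open_maybe_compressed(path, mode="rt", encoding="utf-8"):
    """Open a text file, transparently decompressing if it ends in .gz."""
    if path.endswith(".gz"):
        return gzip.open(path, mode, encoding=encoding)
    return open(path, mode, encoding=encoding)

# Demo: write the same lines compressed and uncompressed, read both back.
lines = ["first example\n", "second example\n"]
tmpdir = tempfile.mkdtemp()
plain_path = os.path.join(tmpdir, "data.txt")
gz_path = os.path.join(tmpdir, "data.txt.gz")
with open(plain_path, "w", encoding="utf-8") as f:
    f.writelines(lines)
with gzip.open(gz_path, "wt", encoding="utf-8") as f:
    f.writelines(lines)

with open_maybe_compressed(gz_path) as f:
    print(f.readlines() == lines)  # True
```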
Checking the Python files in NLTK with "python -m doctest" reveals that many tests are failing. In many cases, the failures are just cosmetic discrepancies between the expected and the actual output, such as missing a blank line, or unescaped linebreaks. Other cases may be real bugs.
If these failures could be avoided, it would become possible to improve CI by running "python -m doctest" each t
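Many of the cosmetic discrepancies can be silenced with doctest option flags rather than by editing every docstring. A minimal sketch of running one doctest with and without `NORMALIZE_WHITESPACE`, which treats all runs of whitespace as equivalent:

```python
import doctest

def sample():
    """A doctest whose expected output differs only in spacing.

    >>> print('alpha  beta')
    alpha beta
    """

tests = doctest.DocTestFinder().find(sample)

# A strict runner flags the double space as a failure...
strict = doctest.DocTestRunner()
for t in tests:
    strict.run(t)

# ...while NORMALIZE_WHITESPACE accepts it as a match.
lenient = doctest.DocTestRunner(optionflags=doctest.NORMALIZE_WHITESPACE)
for t in tests:
    lenient.run(t)

print(strict.failures, lenient.failures)  # 1 0
```

The same flag (along with `ELLIPSIS`) can be passed on the command line as `python -m doctest -o NORMALIZE_WHITESPACE`, which would let CI run the doctests without first fixing every purely cosmetic mismatch.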
Some ideas for figures to add to the PPT
- Linear regression, single-layer neural network
- Multilayer Perceptron with hidden layer
- Backpropagation
- Batch Normalization and alternatives
- Computational Graphs
- Dropout
- CNN - padding, stride, pooling,...
- LeNet
- AlexNet
- VGG
- GoogleNet
- ResNet
- DenseNet
- Memory Net
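For the first figure idea, the point being illustrated (linear regression is a single-layer neural network with one output and no activation) also fits in a few lines of plain-Python gradient descent; the data and hyperparameters below are made up for illustration:

```python
# Linear regression as a single-layer net: one weight, one bias, no activation.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [2.0 * x + 1.0 for x in xs]  # ground truth: w=2, b=1

w, b = 0.0, 0.0
lr = 0.5
for _ in range(2000):
    # Forward pass and mean-squared-error gradients.
    errs = [(w * x + b) - y for x, y in zip(xs, ys)]
    grad_w = 2.0 * sum(e * x for e, x in zip(errs, xs)) / len(xs)
    grad_b = 2.0 * sum(errs) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 3), round(b, 3))  # 2.0 1.0
```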
Created by Alan Turing
- Wikipedia
Change `tensor.data` to `tensor.detach()` due to pytorch/pytorch#6990 (comment): `tensor.detach()` is more robust than `tensor.data`.