Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In the 1950s, Alan Turing published an article that proposed a measure of intelligence, now called the Turing test. More recent techniques, such as deep learning, have produced strong results in language modeling, parsing, and many other natural-language tasks.
Here are 18,317 public repositories matching this topic...
In gensim/models/fasttext.py:

model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised)
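For context, this FastText(...) construction is reached when loading a native Facebook fastText binary through gensim's public loader. A minimal usage sketch (the file path is illustrative, not a real file):

from gensim.models.fasttext import load_facebook_model

# Loading a model trained with Facebook's fastText CLI ends up in the
# FastText(...) construction above, where the unsupported fastText modes
# named in the FIXME (softmax / one-vs-all loss, supervised models) can
# currently slip through unchecked.
model = load_facebook_model("cc.en.300.bin")  # illustrative path
print(model.wv.most_similar("language", topn=3))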
Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.

Steps to reproduce the bug

import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets

ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
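The crash can be surfaced without a full Trainer run by pickling the streaming dataset directly. A minimal sketch of that check (multiprocessing serializes arguments with pickle, so this exercises the same path; the concrete exception type may vary by version):

import pickle

import datasets

# Streaming mode returns an IterableDataset rather than a materialized Dataset.
ds = datasets.load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

# multiprocessing sends objects to worker processes via pickle, so a
# failure here reproduces the crash described above.
try:
    pickle.dumps(ds)
except Exception as e:  # exact exception type varies
    print(f"streaming dataset is not picklable: {e!r}")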
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, and this fails when it tries to load data from my compressed files.
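For comparison, here is roughly what a reader can do during training to handle compression itself. A minimal sketch (read_lines is an illustrative helper, not part of AllenNLP's API):

import gzip
from typing import Iterator

def read_lines(path: str) -> Iterator[str]:
    # Transparently handle gzip-compressed files by extension; anything
    # else is opened as plain text.
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield line.rstrip("\n")

# Inside DatasetReader._read, each decompressed line is then parsed as
# usual, e.g.: for line in read_lines(file_path): ...

The predict command would need the same kind of transparent opening before handing lines to the Predictor.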
Rather than simply caching nltk_data until the cache expires and it's forced to re-download the entire nltk_data, we should perform a check on index.xml that refreshes the cache whenever it differs from the previously cached copy. I would advise doing this in the same way that it's done for requirements.txt:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/wor
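In Python terms, the proposed staleness check amounts to something like the following. A rough sketch (the index URL is assumed to be the nltk_data gh-pages index; the stored-hash location is a made-up detail):

import hashlib
import urllib.request
from pathlib import Path

INDEX_URL = "https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml"
HASH_FILE = Path("nltk_data.index.sha256")  # hypothetical location for the recorded hash

def cache_is_stale() -> bool:
    # Hash the live index.xml and compare it to the hash recorded when
    # the cache was last populated; refresh only on a mismatch.
    with urllib.request.urlopen(INDEX_URL) as resp:
        current = hashlib.sha256(resp.read()).hexdigest()
    previous = HASH_FILE.read_text().strip() if HASH_FILE.exists() else ""
    if current != previous:
        HASH_FILE.write_text(current)
        return True  # index changed: re-download nltk_data
    return False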
Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers; a rough starting-point sketch follows the list below.
Tokenizers concerned
not yet claimed
LED
RemBert
Splinter
MobileBert
ConvBert
RetriBert
claimed
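Such a test file typically builds on the shared TokenizerTesterMixin used by the existing tokenizer tests. A starting-point sketch only, using LED as the example (the required fixtures, attributes, and import paths vary per tokenizer and repo layout, so the existing test files are the authoritative reference):

import unittest

from transformers import LEDTokenizer

from .test_tokenization_common import TokenizerTesterMixin


class LEDTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
    # TokenizerTesterMixin supplies the shared test suite; a concrete test
    # class mainly points it at the tokenizer under test.
    tokenizer_class = LEDTokenizer

    def setUp(self):
        super().setUp()
        # A tiny vocab/merges fixture would be written to self.tmpdirname
        # here so the shared tests can instantiate the tokenizer; existing
        # test files (e.g. BART's) show the exact shape required.

Tokenizer-specific behavior (special tokens, truncation edge cases) then gets its own test methods on top of the shared suite.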