Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In 1950, Alan Turing published "Computing Machinery and Intelligence", which proposed what is now called the Turing test as a criterion of intelligence. More recent techniques, such as deep learning, have markedly improved results in language modeling, parsing, and other natural-language tasks.
Problem: It is challenging to find good resource material (articles, videos, and the like), and we spend a lot of time searching for the resource that is appropriate for us.
Proposed solution: Faceted search can go a long way toward quickly finding a solution tailored to our needs, and ratings on each resource can help us select the best result for our search; a minimal sketch follows.
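Here is a minimal sketch of the proposed faceted search over an in-memory catalog; the field names (topic, format, rating) and the faceted_search helper are illustrative assumptions, not an existing API:

from typing import Dict, List

RESOURCES: List[Dict] = [
    {"title": "Intro to NLP", "topic": "nlp", "format": "video", "rating": 4.6},
    {"title": "Parsing in Practice", "topic": "parsing", "format": "article", "rating": 4.2},
    {"title": "Deep Learning for Text", "topic": "nlp", "format": "article", "rating": 4.8},
]

def faceted_search(resources: List[Dict], **facets) -> List[Dict]:
    # Keep only resources matching every requested facet, best-rated first.
    hits = [r for r in resources if all(r.get(k) == v for k, v in facets.items())]
    return sorted(hits, key=lambda r: r["rating"], reverse=True)

print(faceted_search(RESOURCES, topic="nlp", format="article"))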
In gensim/models/fasttext.py:
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised) ...
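For context, this constructor runs when a Facebook fastText binary is loaded through gensim's public loader; a hedged usage sketch, where the model path is hypothetical:

from gensim.models.fasttext import load_facebook_model

# Load a .bin model trained with Facebook's fastText; internally this builds
# the FastText(...) instance shown above from the binary's header fields.
model = load_facebook_model("cc.en.300.bin")
print(model.wv.most_similar("language", topn=5))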
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor, and this fails when it tries to load data from my compressed files.
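A minimal sketch of the transparent decompression this request asks for, detecting gzip by its magic bytes; open_maybe_compressed is a hypothetical helper, not an existing AllenNLP API:

import gzip
from typing import Iterator, TextIO

GZIP_MAGIC = b"\x1f\x8b"

def open_maybe_compressed(path: str) -> TextIO:
    # Open a text file, decompressing on the fly if it is gzipped.
    with open(path, "rb") as f:
        magic = f.read(2)
    if magic == GZIP_MAGIC:
        return gzip.open(path, "rt", encoding="utf-8")
    return open(path, "r", encoding="utf-8")

def read_lines(path: str) -> Iterator[str]:
    # The predict command could iterate input lines through this instead of open().
    with open_maybe_compressed(path) as f:
        for line in f:
            yield line.rstrip("\n")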
Rather than simply caching nltk_data until the cache expires and it's forced to re-download the entire nltk_data, we should perform a check on the index.xml which refreshes the cache if it differs from some previous cache. I would advise doing this in the same way that it's done for requirements.txt:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/wor
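A sketch of the proposed freshness check, assuming the cache is keyed by a hash of index.xml; the index URL and the cache_is_stale helper are assumptions for illustration, not existing nltk CI code:

import hashlib
import urllib.request

INDEX_URL = "https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml"

def index_digest() -> str:
    # Hash the remote index.xml so upstream changes can be detected.
    with urllib.request.urlopen(INDEX_URL) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def cache_is_stale(cached_digest: str) -> bool:
    # Refresh the cached nltk_data only when index.xml actually changed,
    # instead of waiting for a time-based expiry.
    return index_digest() != cached_digest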
Good first issue
A current error is that a user forwards a batched tensor of input_ids that includes padding tokens, e.g. input_ids = torch.tensor([["hello", "this", "is", "a", "long", "string"], ["hello", "<pad>", "<pad>", "<pad>", "<pad>", "<pad>"]]). In this case, the attention_mask should be provided as well; otherwise the output hidden_states will be computed incorrectly.
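A hedged sketch of the recommended pattern: let the tokenizer produce both the padded input_ids and the matching attention_mask (the model name here is only illustrative):

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# padding=True pads the shorter sequence and returns an attention_mask that is
# 1 for real tokens and 0 for padding, so padded positions are ignored.
batch = tokenizer(["hello this is a long string", "hello"],
                  padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(input_ids=batch["input_ids"],
                    attention_mask=batch["attention_mask"])
print(outputs.last_hidden_state.shape)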