Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In the 1950s, Alan Turing published an article that proposed a measure of intelligence, now called the Turing test. More modern techniques, such as deep learning, have produced strong results in language modeling, parsing, and other natural-language tasks.
At Jina, we build crazy fun things using Jina, and one such example is meme search.
Meme search lets you upload a meme of your choice and returns many similar memes.
If you are interested in getting hands-on with Jina and building something, this is the issue for you.
Idea:
Create a pet image classifier! A minimal sketch of the underlying idea follows below.
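At heart, both meme search and a pet-image classifier rest on the same pattern: turn each image into an embedding vector and compare vectors. The snippet below is a small, framework-agnostic sketch of that nearest-neighbour step using NumPy and cosine similarity; it does not use Jina's API, and the random vectors are stand-ins for embeddings a real image encoder (e.g. a CLIP-style model) would produce.

import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two 1-D embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(query_vec, indexed, top_k=5):
    # `indexed` maps an image name to its embedding; rank by similarity to the query.
    scored = [(name, cosine_similarity(query_vec, vec)) for name, vec in indexed.items()]
    scored.sort(key=lambda item: item[1], reverse=True)
    return scored[:top_k]

# Toy usage with random vectors standing in for real image embeddings.
rng = np.random.default_rng(0)
index = {f"meme_{i}.jpg": rng.normal(size=128) for i in range(100)}
query = rng.normal(size=128)
print(most_similar(query, index, top_k=3))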
In gensim/models/fasttext.py:
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised)
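For context, the snippet above sits inside gensim's loader for native Facebook fastText binaries. A minimal usage sketch is below; it assumes gensim 4.x and uses the documented load_facebook_model / load_facebook_vectors entry points, with an illustrative local path for the .bin file.

from gensim.models.fasttext import load_facebook_model, load_facebook_vectors

# Load the full model (trainable, includes subword n-gram buckets).
model = load_facebook_model("cc.en.300.bin")  # illustrative path
print(model.wv["language"][:5])

# Or load only the word vectors, which is lighter on memory.
wv = load_facebook_vectors("cc.en.300.bin")
print(wv.most_similar("language", topn=3))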
Is your feature request related to a problem? Please describe.
I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome, our NLP models perform regression rather than classification, so binary metrics are not relevant.
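Because the labels become a continuous hate-speech score, evaluation shifts to regression metrics. Below is a brief sketch of that kind of evaluation; it assumes scikit-learn and SciPy are available, and the arrays are made-up example values, not the actual dataset.

import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, mean_squared_error

# Made-up continuous scores: gold interval measures vs. model predictions.
y_true = np.array([-1.2, 0.3, 2.1, 0.8, -0.5])
y_pred = np.array([-0.9, 0.1, 1.8, 1.1, -0.2])

print("MSE:      ", mean_squared_error(y_true, y_pred))
print("MAE:      ", mean_absolute_error(y_true, y_pred))
print("Pearson r:", pearsonr(y_true, y_pred)[0])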
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor. This fails when it tries to load data from my compressed files.
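One way to make both paths tolerant of compressed input is to pick the opener based on the file itself. The helper below is a small sketch using only the standard library; it is not AllenNLP's API, just the kind of function a dataset reader or predictor could call instead of open().

import gzip

def open_maybe_gzipped(path, mode="rt", encoding="utf-8"):
    # Sniff the two-byte gzip magic number rather than trusting the file extension.
    with open(path, "rb") as f:
        is_gzip = f.read(2) == b"\x1f\x8b"
    if is_gzip:
        return gzip.open(path, mode=mode, encoding=encoding)
    return open(path, mode=mode, encoding=encoding)

# Usage: iterate over lines regardless of compression.
# with open_maybe_gzipped("data/test.jsonl.gz") as f:
#     for line in f:
#         ...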
Rather than simply caching nltk_data until the cache expires and being forced to re-download the entire nltk_data, we should perform a check on the index.xml that refreshes the cache only if it differs from the previously cached copy. I would advise doing this in the same way that it's done for requirements.txt:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/wor
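The core of that check is just comparing a hash of the remote index.xml against the hash recorded when the cache was last filled. Below is a rough sketch of that idea in plain Python; the index URL is the public NLTK data index, but the cache-file layout and function name are invented for illustration and are not how the nltk CI actually does it.

import hashlib
import urllib.request
from pathlib import Path

INDEX_URL = "https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml"
HASH_FILE = Path("nltk_data_cache/index.sha256")  # illustrative cache location

def cache_is_stale():
    # Hash the current remote index and compare with the hash saved last time.
    with urllib.request.urlopen(INDEX_URL) as resp:
        remote_hash = hashlib.sha256(resp.read()).hexdigest()
    if not HASH_FILE.exists() or HASH_FILE.read_text().strip() != remote_hash:
        HASH_FILE.parent.mkdir(parents=True, exist_ok=True)
        HASH_FILE.write_text(remote_hash)
        return True   # index changed: re-download nltk_data
    return False      # index unchanged: keep the cached nltk_data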
Hello,
The code says that it will add compatibility for Postponed Evaluation of Annotations (PEP 563) once Python 3.9 is released, which already happened on October 5, 2020. Is there any plan to complete this?
https://github.com/huggingface/transformers/blob/2c2a31ffbcfe03339b1721348781aac4fc05bc5e/src/transformers/hf_argparser.py#L85-L90
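For background, PEP 563 (enabled with from __future__ import annotations) makes all annotations plain strings at runtime, so an argument parser that inspects dataclass fields has to resolve them back into real types. The snippet below is a small standalone illustration of that effect using only the standard library; it is not the transformers HfArgumentParser code.

from __future__ import annotations  # PEP 563: annotations become strings

import dataclasses
import typing

@dataclasses.dataclass
class TrainingArgs:
    learning_rate: float = 3e-4
    epochs: int = 1

# With postponed evaluation, the raw annotation is the string "float",
# not the type float, which breaks naive `field.type is float` checks.
field = dataclasses.fields(TrainingArgs)[0]
print(repr(field.type))                  # "'float'"

# typing.get_type_hints resolves the strings back into real types.
hints = typing.get_type_hints(TrainingArgs)
print(hints["learning_rate"] is float)   # True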