fasttext
Here are 253 public repositories matching this topic...
If I have a word, how do I get the top-k words closest to it? As far as I understand, there is a way to get this from the C++ code, but I can't find anything in the Python library.
Something similar to what the gensim word2vec implementation has:
model.most_similar(positive=[your_word_vector], topn=1)
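Recent releases of the official fastText Python binding appear to expose model.get_nearest_neighbors(word, k) for exactly this; if that is unavailable, the underlying operation is just a cosine-similarity top-k over the vocabulary. A minimal sketch, using a hypothetical word-to-vector dict in place of a real model:

```python
import math

def top_k_similar(query_vec, vectors, k=3):
    """Return the k words whose vectors have the highest cosine
    similarity to query_vec. `vectors` maps word -> list of floats."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    # Score every word, then keep the k best by similarity.
    scored = [(cosine(query_vec, v), w) for w, v in vectors.items()]
    scored.sort(reverse=True)
    return [w for _, w in scored[:k]]
```

With real fastText vectors the dict would come from the model's vocabulary; the ranking logic is the same.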
The documentation in DJL was originally written with the expectation that users are reasonably familiar with deep learning. So, it does not go out of its way to define and explain some of the key concepts. To help users who are newer to deep learning, we created a [documentation convention](https://github.com/awslabs/djl/blob/master/docs/development/development_guideline.md#documentation-conventio
def GenerateNgrams(words, ngrams):
    nglist = []
    for ng in ngrams:
        for word in words:
            nglist.extend([word[n:n+ng] for n in range(len(word)-ng+1)])
    return nglist
Maybe it should be like the following:
def GenerateNgrams(words, ngrams):
    nglist = []
    for ng in ngrams:
        nglist.extend(words[n:n+ng] for n in range(len(words)-ng+1))
    return nglist
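For reference, fastText itself extracts character n-grams per word after adding angle-bracket boundary markers, so prefixes and suffixes get distinct n-grams. A minimal sketch of that scheme (the minn/maxn names mirror fastText's parameters, but this is an illustration, not fastText's actual implementation):

```python
def char_ngrams(word, minn=3, maxn=3):
    """Character n-grams of one word, with fastText-style '<' and '>'
    boundary markers added before slicing."""
    marked = "<" + word + ">"
    grams = []
    for ng in range(minn, maxn + 1):
        # Slide a window of width ng across the marked word.
        grams.extend(marked[n:n + ng] for n in range(len(marked) - ng + 1))
    return grams
```

For example, char_ngrams("cat") yields the 3-grams of "&lt;cat&gt;": "&lt;ca", "cat", "at&gt;".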
The official Python binding from the fastText repository (https://github.com/facebookresearch/fastText/tree/master/python) has only a few small examples. Compared with the pyfasttext documentation, I cannot understand the official one. Does anybody know how to make sense of the official fastText examples? In my opinion, the pyfasttext documentation is better than the official fastText documentation.
The fastText supervised model does not take the document and word representations into account; it just learns a bag of words and labels.
Embeddings are computed only on the word->label relation. It would be interesting to jointly learn the semantic relations label<->document<->word<->context.
For now it is only possible to pre-train word embeddings and then use them as initial vectors for the classifier.
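The pre-training workflow described above can, as far as I know, be driven from the fastText CLI via the -pretrainedVectors option; the file names here are placeholders and the embedding dimension must match the .vec file:

```shell
# Step 1: learn unsupervised word vectors (skipgram) on raw text.
fasttext skipgram -input raw_corpus.txt -output embeddings -dim 100

# Step 2: train the supervised classifier, initializing its input
# vectors from the pre-trained embeddings produced in step 1.
fasttext supervised -input labeled_train.txt -output classifier \
    -dim 100 -pretrainedVectors embeddings.vec
```

The classifier then fine-tunes those vectors on the word->label objective rather than starting from random initialization.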
Example (from TfidfTransformer)
This method expects a list of tuples instead of an iterable. This means that the entire corpus has to be stored as a list.
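To illustrate the constraint: a lazily generated corpus has to be materialized before being handed to an API that requires a list. The generator below is hypothetical and stands in for a disk-backed document stream:

```python
def stream_corpus():
    """Hypothetical generator yielding one bag-of-words document at a
    time, as (token_id, count) tuples; stands in for a disk stream."""
    yield [(0, 1), (1, 2)]
    yield [(1, 1), (2, 3)]

# Materializing the whole stream as a list means every document is
# held in memory at once -- which is exactly the limitation noted above.
corpus = list(stream_corpus())
```

Accepting any iterable (and iterating it once) would avoid this memory cost for large corpora.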