
natural-language-processing

Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In the 1950s, Alan Turing published an article proposing a measure of machine intelligence, now called the Turing test. More recently, techniques such as deep learning have produced strong results in language modeling, parsing, and other natural-language tasks.

Here are 5,655 public repositories matching this topic...

transformers
dhruvsakalley
dhruvsakalley commented Mar 2, 2020

When you look at the variables in the pretrained base uncased BERT, the variables look like list 1. When you train from scratch, two additional variables per layer are introduced, with the suffixes adam_m and adam_v. It would be nice for someone to explain what these variables are and what their significance is to the training process.
If one were to manually initialize variables from a pri
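The `adam_m` and `adam_v` variables are the per-parameter slot variables of the Adam optimizer: exponential moving averages of the gradient and of its square, which the optimizer saves alongside each trainable variable in the checkpoint. A minimal pure-Python sketch of one Adam step (generic, not TensorFlow's actual implementation) shows what each slot holds:

```python
import math

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter; m and v are the optimizer slots."""
    m = beta1 * m + (1 - beta1) * grad          # saved in checkpoints as <var>/adam_m
    v = beta2 * v + (1 - beta2) * grad * grad   # saved in checkpoints as <var>/adam_v
    m_hat = m / (1 - beta1 ** t)                # bias correction for step t
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (math.sqrt(v_hat) + eps)
    return param, m, v
```

Because these slots only matter for resuming training, checkpoints exported for inference typically drop them, which is why the pretrained release lacks them.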

ines
ines commented Sep 29, 2019

I was going through the existing enhancement issues again and thought it'd be nice to collect ideas for spaCy plugins and related projects. There are always people in the community who are looking for new things to build, so here's some inspiration. For existing plugins and projects, check out the spaCy universe.

If you have questions about the projects I suggested,

gensim
chashimo
chashimo commented Mar 17, 2020

I tried selecting hyperparameters for my model following "Tutorial 8: Model Tuning" below:
https://github.com/flairNLP/flair/blob/master/resources/docs/TUTORIAL_8_MODEL_OPTIMIZATION.md

Although I got the "param_selection.txt" file in the result directory, I am not sure how to interpret it, i.e. which parameter combination to use. At the bottom of the "param_selection.txt" file, I found "
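The underlying idea of parameter selection can be sketched generically: score every combination in the grid and keep the best one. This is not flair's own search code, and the scoring function below is a stand-in, not the format of param_selection.txt:

```python
import itertools

def select_best(param_grid, score_fn):
    """Return the parameter combination with the lowest score (e.g. dev loss)."""
    keys = sorted(param_grid)
    best = None
    for values in itertools.product(*(param_grid[k] for k in keys)):
        combo = dict(zip(keys, values))
        score = score_fn(combo)
        if best is None or score < best[0]:
            best = (score, combo)
    return best[1]

# Toy grid and a stand-in scoring function for illustration.
grid = {"learning_rate": [0.05, 0.1], "hidden_size": [64, 128]}
best = select_best(grid, lambda c: abs(c["learning_rate"] - 0.1) + c["hidden_size"] / 1000)
```

In flair's output, the combination to use is the one with the best (lowest loss or highest score) evaluation result among the logged runs.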

rasa
solyarisoftware
solyarisoftware commented Apr 30, 2020

I propose this topic as a feature request, but it's also a documentation issue, given the lack of detail in the user guide paragraph: https://rasa.com/docs/rasa/core/actions/#custom-actions.

What is specified in the paragraph Execute Actions in Other Code is obscure to me, and the details at the API documentation link [Action Server](https://rasa.com/docs/rasa/api/acti
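Based on my reading of the action server contract (field names should be verified against your Rasa version), Rasa Core POSTs JSON containing the name of the next action plus the tracker, and expects back JSON with "events" and "responses". A minimal stdlib-only sketch of such an endpoint, outside the Rasa SDK:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_action(payload):
    """Dispatch on the requested action name and build the response body."""
    name = payload.get("next_action")
    if name == "action_hello":
        return {"events": [], "responses": [{"text": "Hello from a custom action server!"}]}
    return {"events": [], "responses": []}  # unknown action: do nothing

class ActionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_action(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve on the conventional action-server port:
# HTTPServer(("", 5055), ActionHandler).serve_forever()
```

The "events" list lets the server push slot sets or other tracker events back to Core; "responses" carries messages for the user.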

ludwig
ZeroAurora
ZeroAurora commented Mar 6, 2020

Is your feature request related to a problem? Please describe.
Other related issues: #408 #251
I trained a Chinese model for spaCy, linked it to [spacy's package folder]/data/zh (using spacy link), and want to use it for ludwig. However, when I tried to set the config for ludwig, I received an error telling me that there is no way to load the Chinese model.

ValueError: Key ch
dipanjan77
dipanjan77 commented Aug 12, 2019

Description

Add a ReadMe file in the GitHub folder.
Explain the usage of the templates.

Other Comments

Principles of NLP Documentation
Each landing page at the folder level should have a ReadMe which explains:
○ a summary of what the folder offers
○ why and how it benefits users
○ as applicable, documentation on using it, a brief description, etc.
Scenarios folder:

Bond-H
Bond-H commented Sep 17, 2019
  • System environment:

    • Paddle version: 1.5.1, CPU, no additional acceleration modules
    • OS: CentOS 6.3
  • Problem description:

    • Used the paddle.fluid.contrib.slim.Compressor module to compress a model
    • After compression, the float model runs normally, but the int8 version produces the following error:
      (error screenshot omitted)
  • Steps to reproduce:

git clone https://github.com/Bond-SYSU/paddle_compress.git
cd paddle_compress
sh run.sh compress   #
forslund
forslund commented May 4, 2020

When using a pocketsphinx wakeword, mycroft tries to load a language-specific model. If the model doesn't exist, the load fails (report on the forums).

This should be handled with a fallback mechanism: if no language-specific model exists, it should log a warning and fall back to using the English model that is included in mycr
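The fallback described above can be sketched in a few lines (the file naming and loader here are hypothetical, not Mycroft's actual API): try the language-specific model, and on failure warn and return the bundled English one.

```python
import logging
import os

log = logging.getLogger(__name__)

def load_wakeword_model(lang, model_dir="models", default_lang="en"):
    """Return the path of the wakeword model for `lang`, falling back to English."""
    path = os.path.join(model_dir, f"{lang}.pmdl")   # hypothetical naming scheme
    if os.path.exists(path):
        return path
    log.warning("No wakeword model for %r, falling back to %r", lang, default_lang)
    return os.path.join(model_dir, f"{default_lang}.pmdl")
```

The key design point is that a missing optional resource degrades to a warning plus a sensible default instead of a hard failure.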

stanza
bwindsor22
bwindsor22 commented May 4, 2020

Hi! Great package!

Both NLTK and spaCy offer the option to install models from local files, as with:

pip install /Users/you/en_core_web_sm-2.2.0.tar.gz

Do you have any thoughts on adding this to stanza? This makes it easier to deploy in an environment where the resources for the cod

loretoparisi
loretoparisi commented Jan 23, 2019

It would be worthwhile to provide a tutorial on how to train a simple cross-language classification model using sentencepiece. Suppose we have a given training set and have chosen a model (let's say a simple Word2Vec plus softmax, an LSTM model, etc.); how do we use the created sentencepiece model (vocabulary/codes) to feed this model for training and inference?
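The pipeline being asked about can be sketched generically: encode each sentence into subword ids (a trained sentencepiece model would supply the `encode` step), look up an embedding per id, pool them, and apply a softmax classifier. The toy tokenizer below is a stand-in for sentencepiece, and all names here are illustrative, not a library API:

```python
import math

class ToyPieceModel:
    """Stand-in for a trained sentencepiece model: maps subword pieces to ids."""
    def __init__(self, pieces):
        self.vocab = {p: i for i, p in enumerate(pieces)}

    def encode(self, text):
        # Greedy longest-match segmentation; real sentencepiece uses BPE/unigram.
        ids, i = [], 0
        while i < len(text):
            for j in range(len(text), i, -1):
                if text[i:j] in self.vocab:
                    ids.append(self.vocab[text[i:j]])
                    i = j
                    break
            else:
                i += 1  # skip characters not covered by the vocabulary
        return ids

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def classify(ids, embeddings, weights):
    """Average the id embeddings, then apply a linear layer + softmax."""
    dim = len(embeddings[0])
    avg = [sum(embeddings[i][d] for i in ids) / len(ids) for d in range(dim)]
    logits = [sum(w[d] * avg[d] for d in range(dim)) for w in weights]
    return softmax(logits)
```

At inference time the same sentencepiece model must be reused so that ids match the embedding table the classifier was trained with; that shared vocabulary is what makes the cross-language setup work.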

Created by Alan Turing

Wikipedia