data-science
Not sure when this happened, but I love the new left-hand side navigation: https://scikit-learn.org/dev/user_guide.html
(@adrinjalali did this maybe?)
However, when clicking the different entries, the result is inconsistent. For some, it shows a TOC that expands the existing TOC with m
I just started going through the Jupyter notebooks (great stuff, by the way), and when running the cells the first time I would always get an error like "Polygon has no attribute normed". It turns out that the normed option in plt.hist has been deprecated (in matplotlib version 3.1.1 and later, as far as I can tell) and should instead be replaced with density (they appear to operate similarly).
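For reference, a minimal sketch of the substitution described above, assuming a recent matplotlib release where density has replaced the removed normed keyword:

import numpy as np
import matplotlib.pyplot as plt

data = np.random.randn(1000)

# Old call: plt.hist(data, bins=30, normed=True)  # no longer accepted by newer matplotlib
# Current equivalent: normalize so the histogram integrates to 1.
plt.hist(data, bins=30, density=True)
plt.show()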
The AlexNet implementation in TensorFlow has an incomplete architecture: two convolutional layers are missing. This issue is in reference to the Python notebook mentioned below.
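The notebook in question isn't reproduced here, but for context, the original AlexNet uses five convolutional layers. A rough tf.keras sketch of that convolutional stack (layer sizes from Krizhevsky et al., 2012; pooling and normalization details simplified, and not the notebook's own code):

import tensorflow as tf
from tensorflow.keras import layers, models

# The five convolutional layers of AlexNet; an implementation missing two of
# these would match the "incomplete architecture" described above.
model = models.Sequential([
    layers.Conv2D(96, 11, strides=4, activation='relu', input_shape=(227, 227, 3)),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(256, 5, padding='same', activation='relu'),
    layers.MaxPooling2D(3, strides=2),
    layers.Conv2D(384, 3, padding='same', activation='relu'),
    layers.Conv2D(384, 3, padding='same', activation='relu'),
    layers.Conv2D(256, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(3, strides=2),
])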
I was going through the existing enhancement issues again and thought it'd be nice to collect ideas for spaCy plugins and related projects. There are always people in the community who are looking for new things to build, so here's some inspiration.
If you have questions about the projects I suggested,
Note 1: This is not the same tab-completion bug for which many issues have already been opened (the one where a massive number of objects from the global namespace are displayed). This issue specifically has to do with unwanted abbreviation of filename paths. I haven't yet found any other reports of this issue.
Note 2: I also posted this to Stack Overflow, but am posting again here since I didn't
When implemented, there are some community forum requests to follow up on: https://community.plotly.com/t/clientside-callbacks-equivalent-for-dash-callback-context/29305
/Users/travis/build/ray-project/ray/python/ray/node.py:533: DeprecationWarning: Redis.hmset() is deprecated. Use Redis.hset() instead.
redis_client.hmset("webui", {"url": self._webui_url})
/Users/travis/build/ray-project/ray/python/ray/worker.py:358: DeprecationWarning: Redis.hmset() is deprecated. Use Redis.hset() instead.
"run_on_other_drivers": str(run_on_other_drivers),
Example (from TfidfTransformer)
if isinstance(docs[0], tuple):
    docs = [docs]
return [self.gensim_model[doc] for doc in docs]
This method expects a list of tuples instead of an iterable. This means that the entire corpus has to be stored as a list.
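A rough sketch of the kind of change being asked for, assuming the goal is to accept any iterable of documents rather than an in-memory list (illustrative only, not gensim's actual code):

from itertools import chain

def transform(self, docs):
    # Peek at the first element with next() so `docs` can be a generator,
    # not only an indexable list.
    docs_iter = iter(docs)
    try:
        first = next(docs_iter)
    except StopIteration:
        return []
    if isinstance(first, tuple):
        # A single document (a list of (id, weight) tuples) was passed.
        single_doc = list(chain([first], docs_iter))
        return [self.gensim_model[single_doc]]
    # Otherwise stream over the corpus one document at a time.
    return [self.gensim_model[doc] for doc in chain([first], docs_iter)]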
The load_csv line can be simplified, since the columns_to_ignore parameter is supported:
data, labels = load_csv('titanic_dataset.csv', target_column=0, columns_to_ignore=[2, 7], categorical_labels=True, n_classes=2)
and then we don't need to drop those columns in preprocess():
def preprocess(passengers):
for i in range(len(passengers)):
passengers[i][1] = 1. if passengers[i][1] == 'female' else 0.
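Putting those two pieces together, a minimal sketch of the simplified flow, assuming tflearn's load_csv and the titanic_dataset.csv file from the original example:

import numpy as np
from tflearn.data_utils import load_csv

# Drop the 'name' (2) and 'ticket' (7) columns at load time via columns_to_ignore,
# so preprocess() no longer has to remove them itself.
data, labels = load_csv('titanic_dataset.csv', target_column=0,
                        columns_to_ignore=[2, 7],
                        categorical_labels=True, n_classes=2)

def preprocess(passengers):
    # Only the 'sex' field (index 1 after the ignored columns are dropped)
    # still needs converting to a number.
    for i in range(len(passengers)):
        passengers[i][1] = 1. if passengers[i][1] == 'female' else 0.
    return np.array(passengers, dtype=np.float32)

data = preprocess(data)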
Describe the bug
Calling Predictor.get_gradients() returns an empty dictionary
To Reproduce
I am replicating the binary sentiment classification task described in the paper 'Attention is not Explanation' (Jain and Wallace 2019 - https://arxiv.org/pdf/1902.10186.pdf).
My first experiment is on the Stanford Sentiment Treebank dataset. I need to measure the correlation between th
I'm a newbie in programming. I'm trying to use this library; it's very useful for me.
I want to show the centroids in K-means clustering. How can I show them? Thank you so much.
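The issue doesn't name the library, but assuming scikit-learn, the fitted centroids are available as cluster_centers_; a minimal sketch:

import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)            # toy 2-D data for illustration
kmeans = KMeans(n_clusters=3, random_state=0).fit(X)

centroids = kmeans.cluster_centers_   # shape (n_clusters, n_features)
print(centroids)

# Plot the points coloured by cluster, with the centroids marked on top.
plt.scatter(X[:, 0], X[:, 1], c=kmeans.labels_, s=10)
plt.scatter(centroids[:, 0], centroids[:, 1], c='red', marker='x', s=100)
plt.show()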
Description
Add Azure Notebooks to our SETUP doc.
I tested Google Colab and Azure Notebooks to run reco-repo without needing to create any DSVM or compute myself, and it works really well with simple tweaks to the notebooks (e.g., some libraries need to be installed manually).
I think it would be good to add at least Azure Notebooks to our SETUP doc, where users can easily test out our repo w/o
Describe the bug
The input field to create a project from a URL does not trim its input when validating it.
To Reproduce
Steps to reproduce the behavior:
- Go to 'Create Project' -> 'Web Addresses (URL)'
- Paste " http://api.worldbank.org/countries/all/indicators/SP.POP.TOTL?date=2000:2001" with the leading whitespace
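The underlying fix is small in any language; a Python sketch of the idea (names are illustrative, not the project's actual code): strip surrounding whitespace before validating.

from urllib.parse import urlparse

def is_valid_project_url(raw_url: str) -> bool:
    # Illustrative only: trim leading/trailing whitespace before validation,
    # so a URL pasted with a stray leading space is still accepted.
    url = raw_url.strip()
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and bool(parsed.netloc)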
As you can see in this example from the documentation, there is a linestyles argument, but it doesn't show in the legend. This kind of defeats the purpose: when printing in black and white you still can't tell which is which (except from the markers, but often the markers are hard to read because the line is too wide relativel
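A common workaround when a legend only encodes colour is to build the legend handles explicitly; a minimal matplotlib sketch (generic, not the plotting call from the report):

import matplotlib.pyplot as plt
from matplotlib.lines import Line2D

# Legend handles that carry both colour and linestyle, so the curves stay
# distinguishable when printed in black and white.
handles = [
    Line2D([0], [0], color='black', linestyle='-', label='condition A'),
    Line2D([0], [0], color='black', linestyle='--', label='condition B'),
]
plt.plot([0, 1, 2], [0, 1, 4], color='black', linestyle='-')
plt.plot([0, 1, 2], [0, 2, 3], color='black', linestyle='--')
plt.legend(handles=handles)
plt.show()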
Hi,
I'm new to TPOT. I understand that the score function can take strings, but I got the following error when using TPOTClassifier:
ValueError                                Traceback (most recent call last)
in
----> 1 tpot.score(X_test, y_test)

~/miniconda3/envs/ml
Chapter 12 typo
(p380) In last sentence of 1st paragraph and first sentence of 2nd paragraph, "Pitt" should be "Pitts".
(p384) In a picture, "3nd Layer" should be "3rd Layer".
(p385) In 1st paragraph, "the o superscript" should be "the out superscript".
(p406) In 1st code block, "print('Training accuracy: ..)" should be "print('Test accuracy: ..)"
(p410) In 1st paragraph,
Let's enable loading weights from a URL directly.
Option 1: automate it with our current API:
Trainer.load_from_checkpoint('http://')
Option 2: have a separate method:
Trainer.load_from_checkpoint_at_url('http://')
Resources
We can use this under the hood:
https://pytorch.org/docs/stable/hub.html#torch.hub.load_state_dict_from_url
Any thoughts?
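For reference, a rough sketch of what Option 2 could look like on top of torch.hub.load_state_dict_from_url (the helper name, the LitModel class, and the checkpoint URL are all hypothetical; Lightning checkpoints typically store weights under the 'state_dict' key):

import torch
import pytorch_lightning as pl

class LitModel(pl.LightningModule):
    # Minimal placeholder module purely for illustration.
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)

def load_from_checkpoint_at_url(model: pl.LightningModule, url: str):
    # Download (and cache) the checkpoint, then restore the weights from it.
    checkpoint = torch.hub.load_state_dict_from_url(url, map_location="cpu")
    model.load_state_dict(checkpoint["state_dict"])
    return model

# Usage (placeholder URL, not a real checkpoint):
# model = load_from_checkpoint_at_url(LitModel(), "http://example.com/model.ckpt")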
In the Keras documentation, glorot_uniform says that the initializer uses Glorot Uniform from this paper. However, the Keras implementation is totally different from the equation in the paper. Also, there are some arguments such as mode='fan_avg' being the default; it should be the same as in the referenced paper, i.e. 'fan_sum'. Glorot uniform is shown
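For comparison, the Glorot (Xavier) uniform initialization from the paper draws weights from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out)); a small NumPy sketch:

import numpy as np

def glorot_uniform(fan_in, fan_out, shape=None, rng=None):
    # Glorot & Bengio (2010): W ~ U(-limit, limit), limit = sqrt(6 / (fan_in + fan_out)).
    rng = rng or np.random.default_rng()
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    shape = shape if shape is not None else (fan_in, fan_out)
    return rng.uniform(-limit, limit, size=shape)

W = glorot_uniform(fan_in=128, fan_out=64)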