# Data Science

Data science is an inter-disciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge from structured and unstructured data. Data scientists perform data analysis and preparation, and their findings inform high-level decisions in many organizations.

Here are 25,429 public repositories matching this topic...

lesteve commented Feb 23, 2022

Seen in #22547:

MatplotlibDeprecationWarning: Axes3D(fig) adding itself to the figure is deprecated since 3.4. Pass the keyword argument auto_add_to_figure=False and use fig.add_axes(ax) to suppress this warning. The default value of auto_add_to_figure will change to False in mpl3.5 and True values will no longer work in 3.6.  This is consistent with other Axes classes

We need to rep
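The fix that the warning itself describes looks roughly like the sketch below: construct the `Axes3D` with `auto_add_to_figure=False` and attach it to the figure explicitly. This is a minimal standalone example, not the example code the issue refers to.

```python
# Minimal sketch of the pattern the deprecation warning asks for
# (Matplotlib >= 3.4): create the 3-D axes without auto-adding them,
# then add them to the figure explicitly.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

fig = plt.figure()
ax = Axes3D(fig, auto_add_to_figure=False)  # no implicit fig.add_axes(...)
fig.add_axes(ax)                            # attach the axes ourselves
ax.plot([0, 1, 2], [0, 1, 2], [0, 1, 4])    # any 3-D plotting call works here
plt.show()
```

On current Matplotlib versions the usual alternative is `fig.add_subplot(projection='3d')`, which avoids constructing `Axes3D` directly.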

superset
rumbin commented Jan 31, 2022

The Mixed Time-Series chart type allows for configuring the title of the primary and the secondary y-axis.
However, only the title of the primary axis is shown next to its axis; the title of the secondary axis is placed at the upper end of that axis, where it gets hidden by bar values and zoom controls.

How to reproduce the bug

  1. Create a mixed time-series chart
  2. Configure axi

Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

  • Updated Nov 4, 2021
  • Python
pytorch-lightning
dash
tirkarthi commented Jan 12, 2022

Python 3.10 added suggestions for AttributeError and NameError in the error messages. It seems the suggestions are not stored in the exception object but calculated when the error is displayed. There is a note that this won't work with IPython, but it would be good to see if it's feasible. Opening an issue for discussion.

https://bugs.python.org/issue38530
https://docs.python.org/3/whatsnew/3.
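A minimal sketch of the behaviour described above (plain CPython 3.10+, run as a script with the default excepthook rather than IPython): the "Did you mean ...?" hint is produced only when the traceback is rendered, so it never appears on the exception object itself.

```python
# Run under CPython 3.10+ with the default excepthook (not IPython).
try:
    import math
    math.sqr(2)          # typo for math.sqrt
except AttributeError as exc:
    print(exc.args)      # ("module 'math' has no attribute 'sqr'",) -- no suggestion stored
    raise                # the default traceback printer appends "Did you mean: 'sqrt'?"
```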

gensim
AnirudhDagar commented Jan 24, 2022

Although the results look good in all the TensorFlow plots and are consistent across frameworks, there is a small difference (more of a consistency issue): the TensorFlow training loss/accuracy plots look as if they are sampled at fewer points, appearing straighter, smoother, and less wiggly than the PyTorch or MXNet plots.

It can be clearly seen in chapter 6([CNN Lenet](ht

nni
pkubik commented Mar 14, 2022

Describe the issue:
While computing channel dependencies, reshape_break_channel_dependency runs the following code to ensure that the number of input channels equals the number of output channels:

in_shape = op_node.auxiliary['in_shape']    # shape of the node's input tensor
out_shape = op_node.auxiliary['out_shape']  # shape of the node's output tensor
in_channel = in_shape[1]                    # channel dimension (index 1, e.g. NCHW)
out_channel = out_shape[1]
return in_channel != out_channel            # True: the reshape breaks the channel dependency

This is correct
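For illustration only (this is not NNI's API, and the function name below is introduced here), the quoted check can be exercised on concrete shapes; treating index 1 as the channel dimension assumes an NCHW layout.

```python
# Tiny standalone illustration of the quoted check: a reshape "breaks" a
# channel dependency when the channel dimension (index 1) changes.
def reshape_breaks_channel_dependency(in_shape, out_shape):
    return in_shape[1] != out_shape[1]

print(reshape_breaks_channel_dependency((1, 64, 8, 8), (1, 64, 64)))  # False: 64 channels preserved
print(reshape_breaks_channel_dependency((1, 64, 8, 8), (1, 4096)))    # True: channels flattened away
```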

danieldeutsch commented Jun 2, 2021

Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines for the Predictor. This fails when it tries to load data from my compressed files.
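The dataset-reader workaround mentioned above can be as simple as opening the file through gzip when the extension calls for it. The sketch below is hedged: the helper names are hypothetical and this is not AllenNLP's API, just the general pattern.

```python
import gzip
from typing import Iterator

def _open_maybe_compressed(path: str):
    """Open `path` as text, decompressing transparently if it ends in .gz."""
    if path.endswith(".gz"):
        return gzip.open(path, "rt", encoding="utf-8")
    return open(path, "r", encoding="utf-8")

def read_lines(path: str) -> Iterator[str]:
    # Yield one stripped line at a time, whether or not the file is gzipped.
    with _open_maybe_compressed(path) as f:
        for line in f:
            yield line.rstrip("\n")
```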