
Data Science

Data science is an interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge from structured and unstructured data. Data scientists perform data analysis and preparation, and their findings inform high-level decisions in many organizations.

Here are 17,858 public repositories matching this topic...

sh-biswas commented Mar 9, 2021

It appears that the Logistic Regression docs are inconsistent about which solvers support which penalties. The "penalty" parameter states that "The ‘newton-cg’, ‘sag’ and ‘lbfgs’ solvers support only l2 penalties," while the "solver" parameter states that "‘newton-cg’, ‘lbfgs’, ‘sag’ and ‘saga’ handle L2 or no penalty" (attaching some screenshots). This was actually a little unclear to me, as I wasn't sure if the n…
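For context, a minimal sketch of how these solver/penalty combinations behave in practice (exact error messages and defaults vary between scikit-learn releases):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# 'lbfgs' (the default solver) accepts an L2 penalty or no penalty.
LogisticRegression(solver="lbfgs", penalty="l2").fit(X, y)

# An L1 penalty with 'lbfgs' is rejected at fit time; only 'liblinear'
# and 'saga' support L1.
try:
    LogisticRegression(solver="lbfgs", penalty="l1").fit(X, y)
except ValueError as err:
    print(err)  # e.g. "Solver lbfgs supports only 'l2' or 'none' penalties, ..."
```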

superset
zuzana-vej commented Mar 16, 2021

Currently the tabs on SQL Lab queries show a green circle both when a query is running and when a query has successfully finished.

The green circles for "query running" and "query completed" could be made more distinct: a different color, or having the "query running" circle be hollow and the "query completed" one filled in. This would enable users who might go to SQL Lab and trigger a few quer…

Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

  • Updated Feb 18, 2021
  • Python
dash
vdonato commented Mar 19, 2021

This bug is actually caused by #2989, but I'm filing it separately: fixing #2989 may take a few days of effort, while a small workaround for this apparently incorrect behavior (just a bandaid until the real issue is fixed) would likely only take a couple of hours.

What happens here is that a file calls config.get_option on import, causing config files to be parsed, then wh…
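As a rough illustration of the hazard being described (all names below are made up, not the project's actual API), a config getter that lazily parses config files means an import-time call locks in a value before later overrides arrive:

```python
# Toy config module: get_option() lazily parses config files on first access.
_options = {"server.port": 8501}      # built-in defaults
_config_parsed = False


def _parse_config_files():
    """Pretend to read config files and merge them into _options."""
    global _config_parsed
    _config_parsed = True


def get_option(key):
    if not _config_parsed:            # first call triggers parsing
        _parse_config_files()
    return _options[key]


# Some module imported very early does this at import time:
PORT = get_option("server.port")      # parsing runs here; 8501 is frozen into PORT

# Later, command-line flags (or a rerun of config loading) update the options:
_options["server.port"] = 9999

print(get_option("server.port"))      # 9999 for fresh reads
print(PORT)                           # still 8501: the import-time read is stale
```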

pytorch-lightning
gensim
mahnerak commented Jan 2, 2021

When setting train_parameters to False, we often also want to disable dropout/batchnorm, in other words, to run the pretrained model in eval mode.
We've made a small modification to PretrainedTransformerEmbedder that lets the user specify whether the token embedder should be forced into eval mode during the training phase.

Do you think this feature might be handy? Should I open a PR?
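For illustration, a minimal PyTorch sketch of this pattern (not the actual PretrainedTransformerEmbedder change; the wrapper class and argument names are hypothetical): overriding train() so a frozen pretrained submodule stays in eval mode, keeping dropout/batchnorm disabled while the rest of the model trains.

```python
import torch.nn as nn


class FrozenPretrainedEmbedder(nn.Module):
    """Wraps a pretrained module, optionally forcing it into eval mode during training."""

    def __init__(self, pretrained: nn.Module, train_parameters: bool = False,
                 eval_mode: bool = True):
        super().__init__()
        self.pretrained = pretrained
        self.eval_mode = eval_mode
        if not train_parameters:
            for p in self.pretrained.parameters():
                p.requires_grad = False

    def train(self, mode: bool = True):
        # Switch the wrapper (and any other submodules) into the requested mode,
        # but keep the pretrained part in eval mode so dropout/batchnorm behave
        # exactly as they did at pretraining time.
        super().train(mode)
        if self.eval_mode:
            self.pretrained.eval()
        return self

    def forward(self, x):
        return self.pretrained(x)


# Usage: dropout inside `backbone` stays disabled even after model.train().
backbone = nn.Sequential(nn.Linear(16, 16), nn.Dropout(0.5))
model = FrozenPretrainedEmbedder(backbone, train_parameters=False, eval_mode=True)
model.train()
print(model.training, model.pretrained.training)  # True False
```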