
R


R is a free programming language and software environment for statistical computing and graphics. It provides a wide variety of statistical techniques, including linear and non-linear modeling, and numerous graphical techniques.

Here are 21,043 public repositories matching this topic...

graemerocher
graemerocher commented Oct 2, 2019

Currently register looks like:

private static <T> void register(Map<T, T> substitutions, T annotated, T original, T target) {
    if (annotated != null) {
        guarantee(!substitutions.containsKey(annotated) || substitutions.get(annotated) == original || substitutions.get(annotated) == target,
                "Already registered: %s", annotated);
        substitutions.put(annotated, target);
    }
}
dash
cherfongfoo
cherfongfoo commented Oct 12, 2020
import json  # needed for json.load below

from fbprophet.serialize import model_to_json, model_from_json
from fbprophet.diagnostics import cross_validation

with open("serialized_model.json", "r") as fin:
    fb_model = model_from_json(json.load(fin))

df_cv = cross_validation(fb_model, ............)

It produces this error:

File "C:\Users\XXXXX\.conda\envs\fbprophet7\lib\site-packages\fbprophet\diagnostics.py", line 295, in prophet_copy
    stan_backend=m.st
LightGBM
jameslamb
jameslamb commented Feb 2, 2021

Summary

scikit-learn provides a test that people writing scikit-learn extensions can use to check API compatibility with the rest of the scikit-learn ecosystem. Such a test should be added for the code in python-package/lightgbm/dask.py.
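
For reference, a minimal sketch of what such a compatibility test usually looks like with scikit-learn's parametrize_with_checks; the non-Dask estimators are used here only as stand-ins, since exercising the Dask estimators themselves would additionally need a Dask cluster fixture:

# Minimal sketch of a scikit-learn API-compatibility test (not the test
# proposed in the issue); LGBMClassifier/LGBMRegressor are stand-ins.
from sklearn.utils.estimator_checks import parametrize_with_checks
from lightgbm import LGBMClassifier, LGBMRegressor


@parametrize_with_checks([LGBMClassifier(), LGBMRegressor()])
def test_sklearn_compatibility(estimator, check):
    # Each `check` is one of scikit-learn's standard estimator checks
    # (fit/predict contracts, get_params/set_params round-trips, clone, etc.).
    check(estimator)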

Motivation

The discussion in #3883 focused a lot on compatibility of the dask estimators, like DaskLGBMClassifier, with the broader scikit-le

pseudotensor
pseudotensor commented Jan 12, 2021

Problem: the approximate method can still be slow for many trees
catboost version: master
Operating System: ubuntu 18.04
CPU: i9
GPU: RTX2080

It would be good to be able to specify how many trees to use for Shapley values. The model.predict and prediction_type versions allow this; lgbm/xgb allow this.
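
For context, this is the kind of control being asked for; a minimal sketch of the LightGBM equivalent (the data, parameters, and the cutoff of 50 trees are placeholders, not taken from the issue):

# Sketch: limit SHAP-style contribution computation to the first 50 trees
# in LightGBM; data and parameters are placeholders.
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)

booster = lgb.train({"objective": "binary", "verbose": -1},
                    lgb.Dataset(X, y), num_boost_round=500)

# pred_contrib=True returns per-feature contributions plus a bias column;
# num_iteration restricts the computation to the first 50 trees.
contribs = booster.predict(X, pred_contrib=True, num_iteration=50)
print(contribs.shape)  # (1000, 11): 10 feature columns + 1 bias column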

H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.

  • Updated Feb 10, 2021
  • Jupyter Notebook
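
As a quick illustration of the AutoML piece listed above, a minimal sketch; the file name and column name are placeholders, not taken from the repository:

# Minimal H2O AutoML sketch; "train.csv" and the "target" column are
# placeholders for illustration only.
import h2o
from h2o.automl import H2OAutoML

h2o.init()  # start or connect to a local H2O cluster

frame = h2o.import_file("train.csv")
frame["target"] = frame["target"].asfactor()  # treat the target as categorical
predictors = [c for c in frame.columns if c != "target"]

aml = H2OAutoML(max_models=10, seed=1)
aml.train(x=predictors, y="target", training_frame=frame)

print(aml.leaderboard.head())  # candidate models ranked by the default metric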
data-science-at-the-command-line
jeroenjanssens
jeroenjanssens commented Jun 10, 2020

I'm happy to announce that I'll be writing the second edition of Data Science at the Command Line (O'Reilly, 2014). This issue explains why I think a second edition is needed, lists what changes I plan to make, and presents a tentative outline. Finally, I have a few words about the process and giving feedback.

Why a second edition?

While the command line as a technology and as a way of w

StrikerRUS
StrikerRUS commented Oct 18, 2019

I'm sorry if I missed this functionality, but the CLI version definitely doesn't have it (I saw the related code only in generate_code_examples.py). I think it would be very useful for eliminating the copy-paste phase, especially for large models.

Of course, piping is a solution, but not for development in a Jupyter Notebook, for example.
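
If this concerns m2cgen's Python API (the mention of generate_code_examples.py suggests so, but that is an assumption), a workaround sketch for notebooks until such a feature exists; the model and output path are placeholders:

# Workaround sketch (assumption: the project is m2cgen, which exposes
# export_to_python); write the generated code to a file instead of
# copy-pasting it from a notebook cell.
import m2cgen as m2c
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

code = m2c.export_to_python(model)  # generated code as a plain string
with open("model_as_code.py", "w") as fout:  # placeholder output path
    fout.write(code)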

Created by Ross Ihaka, Robert Gentleman

Released August 1993

Website
www.r-project.org
Wikipedia

Related Topics

language