
spark

Here are 4,662 public repositories matching this topic...

hanbaoan123 commented Feb 24, 2020

Issue Description

When I run the CartPole example with the default parameters, it cannot converge to the maximum reward of 200. I wonder what went wrong.
[screenshot of training output]

Version Information

Please indicate relevant versions, including, if applicable:

  • Deeplearning4j version:
cube.js
flink-learning

flink learning blog. http://www.54tianzhisheng.cn Covers Flink fundamentals, concepts, principles, hands-on practice, performance tuning, and source-code analysis. Includes learning examples for Flink Connectors, Metrics, Libraries, the DataStream API, and the Table API & SQL, plus case studies of large production Flink projects (PV/UV, log storage, real-time deduplication of tens of billions of records, and monitoring/alerting). You are welcome to support my column "Big Data Real-Time Computing Engine Flink in Practice and Performance Optimization".

  • Updated May 13, 2020
  • Java
fennuzhichui commented Jan 2, 2020

How can I specify Java 8 when submitting an application with spark-submit?
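
One common approach, sketched here as an assumption rather than a confirmed answer for this project: pass JAVA_HOME to the executors and the YARN application master through spark-submit --conf. The JDK path, class name, and jar name below are placeholders.

# Hypothetical paths and names; adjust to where Java 8 is installed on the cluster nodes.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.executorEnv.JAVA_HOME=/usr/lib/jvm/java-8-openjdk \
  --conf spark.yarn.appMasterEnv.JAVA_HOME=/usr/lib/jvm/java-8-openjdk \
  --class com.example.MyApp \
  my-app.jar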


name: Bug report/Feature request/Question
about: Create a report to help us improve
title: ''
label: bug/enhancement/question
assignees: ''

Environment:

  • Java version:
  • Scala version:
  • Spark version:
  • PyTorch and Python version:
  • OS and version:

Checklist:

  • Did you check if your bug/feature/
thingsboard
dportabella commented May 28, 2016

If I understood it correctly from README.md, we can install it like this:

$ git clone https://github.com/donnemartin/dev-setup.git && cd dev-setup
$ ./.dots bootstrap osxprep brew osx

and later when we need datastores, we run

$ cd ~/dev-setup
$ ./.dots datastores

I understand that bootstrap copies the dot files to the home directory, such as .bash_profile and .exports.
but

Open Source Fast Scalable Machine Learning Platform For Smarter Applications: Deep Learning, Gradient Boosting & XGBoost, Random Forest, Generalized Linear Modeling (Logistic Regression, Elastic Net), K-Means, PCA, Stacked Ensembles, Automatic Machine Learning (AutoML), etc.

  • Updated May 20, 2020
  • Jupyter Notebook
cfregly commented Apr 17, 2019
  File "/root/miniconda3/bin/pipeline", line 11, in <module>
    sys.exit(_main())
  File "/root/miniconda3/lib/python3.7/site-packages/cli_pipeline/cli_pipeline.py", line 5734, in _main
    _fire.Fire()
  File "/root/miniconda3/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  Fil
cli
yiheng commented Jul 11, 2018

Spark 2.3 officially supports running on Kubernetes, while our "Run on Kubernetes" guide is still based on a special build of Spark 2.2, which is out of date (a minimal submission sketch follows the list below). We need to:

  1. update that document to Spark 2.3
  2. release the corresponding Docker images.
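
For reference, a minimal sketch of what a Spark 2.3-style Kubernetes submission looks like, following the upstream "Running on Kubernetes" documentation; the API server address, container image, and example jar path are placeholders:

spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<spark-image> \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar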
yeikel commented Jan 4, 2019

According to the generated build, the commands to launch are the following:

docker pull andypetrella/spark-notebook:0.7.0-scala-2.11.8-spark-2.1.1-hadoop-2.7.2-with-hive
docker run -p 9001:9001 andypetrella/spark-notebook:0.7.0-scala-2.11.8-spark-2.1.1-hadoop-2.7.2-with-hive

Using that image (and I think it i

malleshjm commented Apr 30, 2018

Hello,

I was able to run Python scripts in dev mode using the steps provided in the documentation, but for production I am not sure which folders to keep or which process to follow. By editing the local conf and sh files and running the server_deploy script, I was able to generate the server jar, but I still had to manually add the Python context and upload my egg file.
Can someone pleas

ramkumarkb commented Feb 5, 2020

I have noticed a small error in the documentation around S3 configurations:
https://docs.delta.io/latest/delta-storage.html#amazon-s3

On the read part, it should be load and not save:
spark.read.format("delta").load("s3a://<your-s3-bucket>/<path>/<to>/<delta-table>")

Also, I have successfully tested Delta 0.5.0 with on-premise S3 - https://min.io
There were some quirks around the
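
For context, a hedged sketch of how such a MinIO-backed setup might be launched from spark-shell; the package versions, endpoint, credentials, and path-style flag below are illustrative assumptions, not details taken from this report:

# Assumes a Spark 2.4 / Scala 2.11 build and network access to the MinIO endpoint.
spark-shell \
  --packages io.delta:delta-core_2.11:0.5.0,org.apache.hadoop:hadoop-aws:2.7.3 \
  --conf spark.hadoop.fs.s3a.endpoint=http://minio.local:9000 \
  --conf spark.hadoop.fs.s3a.access.key=<access-key> \
  --conf spark.hadoop.fs.s3a.secret.key=<secret-key> \
  --conf spark.hadoop.fs.s3a.path.style.access=true

From such a shell, the corrected read above (spark.read.format("delta").load(...)) can be pointed at a bucket on the MinIO endpoint.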

mmlspark
ttpro1995 commented Nov 13, 2019

Version

com.microsoft.ml.spark:mmlspark_2.11:jar:0.18.1
spark= 2.4.3
scala=2.11.12

Data (CSV with header): https://gist.github.com/ttpro1995/69051647a256af912803c9a16040f43a

Download the data, save it as a CSV file, and put it at /data/public/HIGGS/higgs.test.predictioncsv

val data = spark.read.option("header", "true").option("inferSchema", "true").csv("/data/public/HIGGS/higgs.test.predictioncsv")

