
All Questions

1 vote · 1 answer · 66 views

Disable inferSchema for JDBC connections

I have an Azure SQL database that I want to query with PySpark. I have to "copy" the data to a temporary table, and then query this temporary table. I would like to use pretty much the same ...

asked by ralpar (11)
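Spark's JDBC reader takes column types from the database metadata rather than an `inferSchema` flag, but the `customSchema` option lets you pin the types explicitly. A minimal PySpark sketch, with hypothetical Azure SQL connection details:

```python
# Hypothetical connection details; replace with your own.
options = {
    "url": "jdbc:sqlserver://myserver.database.windows.net:1433;databaseName=mydb",
    "dbtable": "dbo.events",
    "user": "spark_reader",
    "password": "***",
    # Override the column types Spark would derive from the database metadata:
    "customSchema": "id DECIMAL(38, 0), event_time TIMESTAMP, payload STRING",
}

# With a live SparkSession this would be:
# df = spark.read.format("jdbc").options(**options).load()
```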
1 vote · 1 answer · 110 views

Spark/PySpark Redshift JDBC write: "No suitable driver" / ClassNotFoundException: com.amazon.redshift.jdbc42.Driver

I'm trying to write a DataFrame from Spark (PySpark) to an Amazon Redshift Serverless cluster using the Redshift JDBC driver. I keep running into driver-related errors: java.sql.SQLException: No ...

asked by Cauder (2,657)
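This pair of errors usually means the Redshift JDBC jar never made it onto the Spark classpath, or the driver class was never named so `DriverManager` cannot resolve the URL. A hedged sketch (the jar path, cluster URL, and table are hypothetical):

```python
# Ship the Redshift JDBC jar with the job, e.g.:
#   spark-submit --jars /opt/jars/redshift-jdbc42-2.1.0.30.jar my_job.py

write_options = {
    "url": "jdbc:redshift://my-workgroup.123456789012.us-east-1"
           ".redshift-serverless.amazonaws.com:5439/dev",
    "dbtable": "public.my_table",
    "user": "awsuser",
    "password": "***",
    # Name the driver class explicitly so it is loaded even when
    # DriverManager does not auto-discover it:
    "driver": "com.amazon.redshift.jdbc42.Driver",
}

# With a live DataFrame this would be:
# df.write.format("jdbc").options(**write_options).mode("append").save()
```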
-1 votes · 1 answer · 67 views

Running a normal non-Spark Java application on a Spark cluster

I want to run a normal Java application which connects to a Teradata database. I would like to run this Java app on a Spark cluster, although my Java app is non-Spark. My questions are as follows: Is ...

asked by ironfreak
-1 votes · 1 answer · 633 views

Create an Apache Spark connection to Oracle DB with JDBC

I am trying to create a connection to my company's Oracle test server with Apache Spark and Scala. Below is the statement I run in the spark-shell. I am using JDK 8 and have installed the appropriate ...

asked by user8675309
2 votes · 0 answers · 57 views

How to store a dataset over JDBC to different tables simultaneously?

We have a dataset with a column named "table", and each row should be saved over JDBC to the table that column names. We should filter by table and store each result to its own table. final Dataset<Row> dust = (create dataset)...

asked by Sitnikov Artem
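One common pattern for this: collect the distinct values of the `table` column, then filter and issue one JDBC write per target table. A PySpark sketch (column and function names are illustrative); the pure-Python helper just dedupes the collected values:

```python
def distinct_tables(rows):
    # rows is what df.select("table").distinct().collect() would return,
    # modeled here as mappings with a "table" key.
    return sorted({r["table"] for r in rows})

def write_per_table(df, jdbc_url, props):
    # Hypothetical loop: one JDBC write per distinct "table" value.
    for name in distinct_tables(df.select("table").distinct().collect()):
        (df.filter(df["table"] == name)
           .drop("table")
           .write.jdbc(url=jdbc_url, table=name, mode="append",
                       properties=props))
```

The writes run sequentially from the driver; each one still parallelizes across executors.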
0 votes · 1 answer · 530 views

resultset.next() issue with com.simba.spark.jdbc42 driver

I'm using JDK 1.8 on my Unix server and using spark.jdbc42 to connect to Azure Databricks. It gives the following error message on the Unix server at resultset.next(): Exception in thread "...

asked by Chet (21)
1 vote · 0 answers · 60 views

"Type 10 is not supported" error in IntelliJ when trying to run my code with PostgreSQL [duplicate]

I am new to PostgreSQL and have been trying to connect my Scala projects in IntelliJ to PostgreSQL, but I am getting this error. [error] org.postgresql.util.PSQLException: The authentication type 10 is ...

asked by Johhny Effren
1 vote · 1 answer · 184 views

Not able to create a BigQuery connection from within a Spark jar when running on a Dataproc cluster

I want to delete data from a BigQuery table from within Spark running on a Dataproc cluster, but I am getting a SIGSEGV runtime error when running the Spark application. Here is the full error when trying: ...

asked by ashutosh tripathi
0 votes · 1 answer · 449 views

Caused by: java.sql.SQLException: Out of range value for column 'age' : value age

I keep getting an SQLException, but I suspect that it is not the real problem. The table is: create table person (first varchar(30) DEFAULT NULL, last varchar(30) DEFAULT NULL, gender char(1) DEFAULT NULL, ...

asked by Lyhao (1)
1 vote · 1 answer · 578 views

Catch the rows of data causing a BatchUpdateException

Code: import java.sql.*; import java.util.Arrays; import java.util.Properties; public class BatchInsertErrorHandlingExample { public static void main(String[] args) { Properties myProp = ...

asked by Sarvesh Singh
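In JDBC, `BatchUpdateException.getUpdateCounts()` returns one status per batched statement; entries equal to `Statement.EXECUTE_FAILED` (-3) mark the failing rows, and drivers that stop at the first failure return a shorter array instead. A pure-Python sketch of that bookkeeping, with made-up rows standing in for the batch:

```python
EXECUTE_FAILED = -3  # value of java.sql.Statement.EXECUTE_FAILED

def failed_rows(update_counts, batch):
    """Pair getUpdateCounts() output with the batched rows that failed."""
    failed = [batch[i] for i, count in enumerate(update_counts)
              if count == EXECUTE_FAILED]
    # If the driver stopped at the first failure, the counts array is
    # shorter than the batch: everything from that index on is the
    # failing row plus the unprocessed remainder.
    failed.extend(batch[len(update_counts):])
    return failed
```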
2 votes · 1 answer · 3k views

Unable to connect to a database using JDBC within Spark with Scala

I'm trying to read data from JDBC in Spark Scala. Below is the code written in Databricks. val df = spark .read .format("jdbc") .option("url", <connection-string>) ....

asked by SanjanaSanju
0 votes · 0 answers · 232 views

Teradata export to HDFS using Spark is failing with spool space

Hello, I am trying to load a huge (~5 billion row) table from Teradata to Hadoop using Spark, as below: 1. I am able to display a dataframe with 20 records. 2. The job is failing with a spool space issue. 3. I am able ...

asked by naveen p
0 votes · 1 answer · 675 views

Read a Spark dataframe based on size (MB/GB)

Please help me with this case: I want to read a Spark dataframe based on size (MB/GB), not row count. Suppose I have 500 MB of space left for the user in my database and the user wants to insert 700 MB more ...

asked by Aaryan Roy
0 votes · 0 answers · 2k views

Kerberos authentication failing with spark-submit command

So I'm developing a Spring Batch engine. There are about 5 processes that are running, and they get triggered by passing in a particular command-line argument. For 4 of these 5 processes I am running a ...

asked by BrianCode
0 votes · 1 answer · 85 views

java.sql.SQLException: Column count doesn't match value count at row 1 [Spark / Java / UPSERT]

I'm trying to update values using JDBC in Spark and I keep getting the same error. I used the following query: statement.addBatch(("INSERT INTO gasoil_summeries " + "VALUES (" ...
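"Column count doesn't match value count" means the VALUES list has a different arity than the target table expects; naming the columns explicitly in the INSERT makes any mismatch visible and keeps the statement valid if the table gains columns. A sketch that builds such a statement (the column names are hypothetical; the table name is from the question):

```python
# Hypothetical column list for the gasoil_summeries table in the question.
columns = ["station_id", "summary_date", "volume_litres"]

# One "?" placeholder per named column, so each batched row must
# supply exactly that many values.
placeholders = ", ".join(["?"] * len(columns))
sql = (
    f"INSERT INTO gasoil_summeries ({', '.join(columns)}) "
    f"VALUES ({placeholders})"
)
```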
