When I try to query the list of tables in a schema from a Redshift DB, I get the following error.
I have tried both the `query` and `dbtable` options with the same result. When I query the DB with, say, DBeaver, I can extract the list of tables with no problem. If I use the script below with a "real" table, it works fine.
Databricks Runtime: 5.3 (includes Apache Spark 2.4.0, Scala 2.11)
```
java.sql.SQLException: Exception thrown in awaitResult:

/databricks/spark/python/pyspark/sql/dataframe.py in show(self, n, truncate, vertical)
    377         """
    378         if isinstance(truncate, bool) and truncate:
--> 379             print(self._jdf.showString(n, 20, vertical))
    380         else:
    381             print(self._jdf.showString(n, int(truncate), vertical))

/databricks/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py in __call__(self, *args)
   1255         answer = self.gateway_client.send_command(command)
   1256         return_value = get_return_value(
-> 1257             answer, self.gateway_client, self.target_id, self.name)
   1258
   1259         for temp_arg in temp_args:
```
This is the script I use:
```python
JDBC_URL = "jdbc:redshift://xyz.redshift.amazonaws.com:5439/xyz?user=user&password=pwd"
SQL_QUERY = "SELECT * FROM information_schema.tables t WHERE t.table_schema = 'schema_name' AND t.table_type = 'BASE TABLE'"
REDSHIFT_S3_TEMP_FOLDER = "s3a://xyz"

df = (spark.read
      .format("com.databricks.spark.redshift")
      .option("url", JDBC_URL)
      .option("query", SQL_QUERY)
      .option("tempdir", REDSHIFT_S3_TEMP_FOLDER)
      .option("forward_spark_s3_credentials", "true")
      .load())

df.show()
```
Hi, do we have a solution for this? I am having exactly the same issue: I cannot write to Redshift from Databricks, but from DBeaver it works fine. I can read from Redshift using Databricks, FYI.
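A likely cause (not confirmed in this thread): `information_schema` and the `pg_*` catalog tables are leader-node-only in Redshift, and the `com.databricks.spark.redshift` connector materializes query results by issuing an `UNLOAD` to the S3 `tempdir`. `UNLOAD` runs on the compute nodes and cannot select from leader-node-only tables, so such queries fail even though they work over a direct JDBC connection like DBeaver's. A workaround sketch is to use Spark's generic `jdbc` source instead, which fetches rows through the driver and skips the S3 staging step. This is a hypothetical sketch: the driver class name assumes the Amazon Redshift JDBC driver jar is attached to the cluster, and the `query` option requires Spark 2.4+.

```python
# Sketch of a workaround, assuming the Redshift JDBC driver is on the
# cluster classpath. Values below mirror the placeholders in the issue.
JDBC_URL = "jdbc:redshift://xyz.redshift.amazonaws.com:5439/xyz?user=user&password=pwd"
SQL_QUERY = (
    "SELECT * FROM information_schema.tables t "
    "WHERE t.table_schema = 'schema_name' AND t.table_type = 'BASE TABLE'"
)

def jdbc_read_options(url: str, query: str) -> dict:
    """Build options for Spark's generic JDBC source (no S3 UNLOAD involved)."""
    return {
        "url": url,
        # Spark >= 2.4 supports passing an arbitrary query instead of dbtable.
        "query": query,
        # Assumed driver class; shipped with the Amazon Redshift JDBC driver.
        "driver": "com.amazon.redshift.jdbc42.Driver",
    }

# Usage on a Databricks cluster (not executable outside one):
# df = spark.read.format("jdbc").options(**jdbc_read_options(JDBC_URL, SQL_QUERY)).load()
# df.show()
```

Since catalog queries return small result sets, losing the parallel S3 unload path costs nothing here; the connector's UNLOAD route only pays off for large table scans.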