Commit
[SPARK-41379][SS][PYTHON] Provide cloned spark session in DataFrame in user function for foreachBatch sink in PySpark

### What changes were proposed in this pull request?

This PR proposes to provide the cloned spark session in the DataFrame passed to the user function for the foreachBatch sink in PySpark.

### Why are the changes needed?

It's arguably a bug - previously, the given DataFrame was associated with two different SparkSessions: 1) the one which runs the streaming query (accessed via `df.sparkSession`), and 2) the one which the microbatch execution "cloned" (accessed via `df._jdf.sparkSession()`). If users pick 1), it defeats the purpose of cloning the spark session, e.g. disabling AQE. Also, which session is picked up depends on the underlying implementation of each method in DataFrame, which leads to inconsistency.

The following is a problematic example:

```
def user_func(batch_df, batch_id):
    batch_df.createOrReplaceTempView("updates")
    ...  # what is the right way to refer to the temp view "updates"?
```

Before this PR, the only way to refer to the temp view "updates" was through the "internal" field in DataFrame, `_jdf`. That is, only a new query run via `batch_df._jdf.sparkSession()` could see the temp view defined in the user function. We would like to make this possible without forcing end users to access an "internal" field. After this PR, they can (and should) use `batch_df.sparkSession` instead.

### Does this PR introduce _any_ user-facing change?

Yes, this PR makes it consistent which spark session is used. Users can use `df.sparkSession` to access the cloned spark session, which is the same session the methods in DataFrame use.

### How was this patch tested?

New test case which fails with the current master branch.

Closes apache#38906 from HeartSaVioR/SPARK-41379.

Authored-by: Jungtaek Lim <[email protected]>
Signed-off-by: Jungtaek Lim <[email protected]>
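For illustration, here is a minimal end-to-end sketch of the pattern this PR enables. The `user_func` name follows the example above; the rate source, its `rowsPerSecond` option, the app name, and the timeout are assumptions chosen only to make the demo runnable, not part of the change itself:

```
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("foreachBatchDemo").getOrCreate()

def user_func(batch_df, batch_id):
    # Register a temp view on the batch DataFrame for this micro-batch.
    batch_df.createOrReplaceTempView("updates")
    # After this PR, batch_df.sparkSession is the cloned session used by the
    # microbatch execution, so it can see the temp view "updates" without
    # touching the internal _jdf field.
    batch_df.sparkSession.sql("SELECT COUNT(*) AS cnt FROM updates").show()

# A toy streaming source, just to exercise the foreachBatch sink.
stream_df = spark.readStream.format("rate").option("rowsPerSecond", 1).load()

query = stream_df.writeStream.foreachBatch(user_func).start()
query.awaitTermination(10)  # run briefly for demonstration purposes
query.stop()
```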