When Abris started, it provided many features that were not available in Spark. This has changed: Spark's built-in Avro support now covers many of those features.
Abris is still useful because it supports the Confluent Avro format and provides Schema Registry client abstractions, but it may not be the best place to add new features that are already part of Spark.
A better solution could be to use the Spark Avro expressions directly from Abris and just wrap them in an Abris layer that deals with the Confluent format. That would allow us to provide the underlying Spark features and at the same time keep the Abris functionality.
This might be challenging, since the Spark Avro expressions are private and not designed to be used this way, but it seems to be doable.
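To illustrate what such a wrapper layer could look like, here is a minimal sketch of the Confluent wire-format handling that would sit in front of Spark's own Avro deserialization. The Confluent format itself is documented (magic byte `0x00`, a 4-byte big-endian schema id, then the plain Avro payload); the object and method names (`ConfluentWireFormat`, `decode`, `encode`) are hypothetical, not part of Abris or Spark.

```scala
import java.nio.ByteBuffer

// Hypothetical helper for the Confluent wire format:
//   byte 0      : magic byte 0x00
//   bytes 1..4  : schema id (big-endian Int)
//   bytes 5..   : plain Avro payload
object ConfluentWireFormat {
  val MagicByte: Byte = 0x00

  /** Splits a Confluent-framed record into (schemaId, avroPayload). */
  def decode(record: Array[Byte]): (Int, Array[Byte]) = {
    require(record.length >= 5, s"Record too short: ${record.length} bytes")
    require(record(0) == MagicByte, s"Unknown magic byte: ${record(0)}")
    val schemaId = ByteBuffer.wrap(record, 1, 4).getInt
    (schemaId, record.drop(5))
  }

  /** Prepends the Confluent header to a plain Avro payload. */
  def encode(schemaId: Int, avroPayload: Array[Byte]): Array[Byte] =
    ByteBuffer.allocate(5 + avroPayload.length)
      .put(MagicByte)
      .putInt(schemaId)
      .put(avroPayload)
      .array()
}
```

A wrapper could then look up the writer schema for `schemaId` in the Schema Registry and hand the remaining payload to Spark's Avro machinery, so that only the framing logic lives in Abris.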
This issue is still up for discussion.
For example, za.co.absa.abris.avro.sql.AvroDeserializer is a copy of org.apache.spark.sql.avro.AvroDeserializer (from Spark 2.4) with a few changes. It would be better to somehow reuse the original implementation; otherwise, ABRiS' AvroDeserializer will diverge from future Spark versions.
I did some investigation: the AvroDeserializer changes were caused by the use of ScalaDatumReader (in Abris 3.0.0), which returned different objects than the ones that AvroDeserializer normally receives as input.
ScalaDatumReader is not used any more, so it looks like we can use the standard version of AvroDeserializer now.