diff --git a/plugins/parquet/README.md b/plugins/parquet/README.md
index 4dd5bb63346..bb266c89357 100644
--- a/plugins/parquet/README.md
+++ b/plugins/parquet/README.md
@@ -47,7 +47,7 @@ dataset := ParquetIO.Read(layout, '/source/directory/data.parquet');
 
 The Write function empowers ECL programmers to write ECL datasets to Parquet files. By leveraging the Parquet format's columnar storage capabilities, this function provides efficient compression and optimized storage for data. There is an optional argument that sets the overwrite behavior of the plugin. The default value is false meaning it will throw an error if the target file already exists. If overwrite is set to true the plugin will check for files that match the target path passed in and delete them first before writing new files.
 
-The Parquet Plugin supports all available Arrow compression types. Specifying the compression when writing is optional and defaults to Uncompressed. The options for compressing your files are Snappy, GZip, Brotli, LZ4, LZ4Frame, LZ4Hadoop, ZSTD, Uncompressed.
+The Parquet Plugin supports all available Arrow compression types. Specifying the compression when writing is optional and defaults to Uncompressed. The options for compressing your files are Snappy, GZip, Brotli, LZ4, ZSTD, Uncompressed.
 
 ```
 overwriteOption := TRUE;
diff --git a/plugins/parquet/parquetembed.cpp b/plugins/parquet/parquetembed.cpp
index a3bd7cf5aeb..9a9c06dd109 100644
--- a/plugins/parquet/parquetembed.cpp
+++ b/plugins/parquet/parquetembed.cpp
@@ -1874,10 +1874,6 @@ ParquetEmbedFunctionContext::ParquetEmbedFunctionContext(const IContextLogger &_
                 compressionOption = arrow::Compression::BROTLI;
             else if (strieq(val, "lz4"))
                 compressionOption = arrow::Compression::LZ4;
-            else if (strieq(val, "lz4frame"))
-                compressionOption = arrow::Compression::LZ4_FRAME;
-            else if (strieq(val, "lz4hadoop"))
-                compressionOption = arrow::Compression::LZ4_HADOOP;
             else if (strieq(val, "zstd"))
                 compressionOption = arrow::Compression::ZSTD;
             else if (strieq(val, "uncompressed"))
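
For review context, here is a minimal sketch of how the remaining compression options would be used from ECL. This example is not part of the patch: the record layout, `inDataset`, and the output path are hypothetical, and the `ParquetIO.Write` call assumes a (dataset, path, overwrite, compression) argument order mirroring the `ParquetIO.Read` example in the README hunk above.

```
// Hedged sketch only -- layout, inDataset, and the output path are
// placeholders; the Write signature is assumed from the README examples.
IMPORT Parquet;

layout := RECORD
    STRING name;
    INTEGER4 age;
END;

inDataset := DATASET([{'Alice', 30}, {'Bob', 25}], layout);

overwriteOption := TRUE;       // delete existing files at the target path before writing
compressionOption := 'ZSTD';   // one of: Snappy, GZip, Brotli, LZ4, ZSTD, Uncompressed

ParquetIO.Write(inDataset, '/output/directory/data.parquet', overwriteOption, compressionOption);
```

Dropping the `lz4frame` and `lz4hadoop` branches keeps the option parser in step with the documented list, so those strings would presumably now fall through to whatever handling the plugin applies to unrecognized compression values.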