[Bug] Caused by: java.lang.ClassCastException: java.lang.Byte cannot be cast to java.lang.Integer #4568

Open
ljingz opened this issue Nov 21, 2024 · 2 comments
Labels: bug (Something isn't working)

Comments

ljingz commented Nov 21, 2024

Search before asking

  • I searched in the issues and found nothing similar.

Paimon version

0.9

Compute Engine

Spark

Minimal reproduce step

CREATE TABLE tmp.test1234 (
  id INT,
  order_id STRING,
  game_code STRING,
  is_delete TINYINT
) USING paimon TBLPROPERTIES (
  'snapshot.time-retained'='4 h',
  'snapshot.num-retained.min'='1',
  'metastore.partitioned-table'='true',
  'dynamic-bucket.initial-buckets'='1',
  'dynamic-bucket.target-row-num'='6000000',
  'file.format'='parquet'
);

insert into tmp.test1234 values (1,'xxx','yyy',1);

select * from tmp.test1234 where is_delete=1;
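
A possible workaround until a fixed version is available (an untested sketch; it assumes the crash comes from the pushed-down filter reaching Parquet's row-group statistics): cast the TINYINT column in the predicate, which typically prevents pushdown so the Parquet statistics filter is never evaluated.

select * from tmp.test1234 where cast(is_delete as int) = 1;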

What doesn't meet your expectations?

Caused by: java.lang.ClassCastException: java.lang.Byte cannot be cast to java.lang.Integer
at org.apache.paimon.shade.org.apache.parquet.schema.PrimitiveComparator$IntComparator.compareNotNulls(PrimitiveComparator.java:85)
at org.apache.paimon.shade.org.apache.parquet.schema.PrimitiveComparator.compare(PrimitiveComparator.java:63)
at org.apache.paimon.shade.org.apache.parquet.column.statistics.Statistics.compareMinToValue(Statistics.java:388)
at org.apache.paimon.shade.org.apache.parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:148)
at org.apache.paimon.shade.org.apache.parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:67)
at org.apache.paimon.shade.org.apache.parquet.filter2.predicate.Operators$Eq.accept(Operators.java:178)
at org.apache.paimon.shade.org.apache.parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:410)
at org.apache.paimon.shade.org.apache.parquet.filter2.statisticslevel.StatisticsFilter.visit(StatisticsFilter.java:67)
at org.apache.paimon.shade.org.apache.parquet.filter2.predicate.Operators$And.accept(Operators.java:379)
at org.apache.paimon.shade.org.apache.parquet.filter2.statisticslevel.StatisticsFilter.canDrop(StatisticsFilter.java:75)
at org.apache.paimon.shade.org.apache.parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:103)
at org.apache.paimon.shade.org.apache.parquet.filter2.compat.RowGroupFilter.visit(RowGroupFilter.java:45)
at org.apache.paimon.shade.org.apache.parquet.filter2.compat.FilterCompat$FilterPredicateCompat.accept(FilterCompat.java:149)
at org.apache.paimon.shade.org.apache.parquet.filter2.compat.RowGroupFilter.filterRowGroups(RowGroupFilter.java:72)
at org.apache.paimon.shade.org.apache.parquet.hadoop.ParquetFileReader.filterRowGroups(ParquetFileReader.java:351)
at org.apache.paimon.shade.org.apache.parquet.hadoop.ParquetFileReader.<init>(ParquetFileReader.java:250)
at org.apache.paimon.format.parquet.ParquetReaderFactory.createReader(ParquetReaderFactory.java:106)
at org.apache.paimon.format.parquet.ParquetReaderFactory.createReader(ParquetReaderFactory.java:72)
at org.apache.paimon.io.FileRecordReader.<init>(FileRecordReader.java:82)
at org.apache.paimon.operation.RawFileSplitRead.createFileReader(RawFileSplitRead.java:263)
at org.apache.paimon.operation.RawFileSplitRead.lambda$createReader$1(RawFileSplitRead.java:169)
at org.apache.paimon.mergetree.compact.ConcatRecordReader.create(ConcatRecordReader.java:53)
at org.apache.paimon.operation.RawFileSplitRead.createReader(RawFileSplitRead.java:177)
at org.apache.paimon.operation.RawFileSplitRead.createReader(RawFileSplitRead.java:144)
at org.apache.paimon.table.AppendOnlyFileStoreTable$1.reader(AppendOnlyFileStoreTable.java:128)
at org.apache.paimon.table.source.AbstractDataTableRead.createReader(AbstractDataTableRead.java:82)
at org.apache.paimon.spark.PaimonPartitionReaderFactory.$anonfun$createReader$1(PaimonPartitionReaderFactory.scala:55)
at org.apache.paimon.spark.PaimonPartitionReader.readSplit(PaimonPartitionReader.scala:90)
at org.apache.paimon.spark.PaimonPartitionReader.<init>(PaimonPartitionReader.scala:42)
at org.apache.paimon.spark.PaimonPartitionReaderFactory.createReader(PaimonPartitionReaderFactory.scala:56)
at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.advanceToNextIter(DataSourceRDD.scala:84)
at org.apache.spark.sql.execution.datasources.v2.DataSourceRDD$$anon$1.hasNext(DataSourceRDD.scala:63)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:388)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:893)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:893)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:367)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:331)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:166)
at org.apache.spark.scheduler.Task.run(Task.scala:141)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
... 3 more
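
For context: Parquet stores TINYINT as physical INT32, so the row-group min/max statistics are java.lang.Integer values, and Parquet's IntComparator casts both operands to Integer. The trace suggests the pushed-down literal arrives as a java.lang.Byte instead. Below is a minimal standalone illustration of that mismatch (an assumption-laden sketch, not the actual shaded Parquet code; the class and variable names are invented):

import java.util.Comparator;

public class CastSketch {
    // Mimics the shape of Parquet's PrimitiveComparator.IntComparator:
    // both operands are cast to Integer unconditionally.
    static final Comparator<Object> INT32 =
            (a, b) -> Integer.compare((Integer) a, (Integer) b);

    public static void main(String[] args) {
        Object rowGroupMin = Integer.valueOf(1);     // INT32 column statistic
        Object filterValue = Byte.valueOf((byte) 1); // TINYINT literal kept as a Byte
        // Throws: java.lang.ClassCastException: java.lang.Byte cannot be cast
        // to java.lang.Integer -- the same failure as in the trace above.
        INT32.compare(rowGroupMin, filterValue);
    }
}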

Anything else?

No response

Are you willing to submit a PR?

  • I'm willing to submit a PR!
@ljingz ljingz added the bug Something isn't working label Nov 21, 2024
@ljingz ljingz changed the title [Bug] [Bug] Caused by: java.lang.ClassCastException: java.lang.Byte cannot be cast to java.lang.Integer Nov 22, 2024
liyubin117 (Contributor) commented

I found that it works well in Flink using the following statements:

Flink SQL> insert into test12345 values (1,'a','b',cast(1 as tinyint));
[INFO] Submitting SQL update statement to the cluster...
[INFO] SQL update statement has been successfully submitted to the cluster:
Job ID: 9410b9015e5f8beb99731a8d2a363d53

Flink SQL> select * from test12345;
[INFO] Result retrieval cancelled.

ChaomingZhangCN (Contributor) commented

@ljingz Fixed in #4365
