java.lang.AbstractMethodError #1

Open
rhinempi opened this issue Feb 9, 2017 · 5 comments

rhinempi (Owner) commented Feb 9, 2017

Error log reported by Chu Wang:

17/02/08 18:44:20 ERROR Utils: Aborting task
java.lang.AbstractMethodError: uni.bielefeld.cmg.sparkhit.pipeline.SparkPipe$2SparkBatchAlign.call(Ljava/lang/Object;)Ljava/util/Iterator;
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:124)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply$mcV$sp(PairRDDFunctions.scala:1203)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13$$anonfun$apply$7.apply(PairRDDFunctions.scala:1203)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1325)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1211)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1$$anonfun$13.apply(PairRDDFunctions.scala:1190)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:70)
at org.apache.spark.scheduler.Task.run(Task.scala:85)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
17/02/08 18:44:20 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)

rhinempi self-assigned this Feb 9, 2017
rhinempi (Owner, Author) commented Mar 9, 2017

The Spark 2.0.0 API changed the return type of the call function from Iterable to Iterator, so jars compiled against the older interface throw AbstractMethodError when run on Spark 2.0.0+.

Fixed.
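
For reference, a minimal sketch of the signature change (the SplitWords class and its word-splitting logic here are illustrative, not Sparkhit's actual code):

import java.util.Arrays;
import java.util.Iterator;

import org.apache.spark.api.java.function.FlatMapFunction;

// Spark 1.x declared:    Iterable<R> call(T t) throws Exception
// Spark 2.0.0+ declares: Iterator<R> call(T t) throws Exception
// A jar compiled against the 1.x interface never implements the
// Iterator-returning method, so invoking it on a 2.0.0+ cluster
// fails at runtime with java.lang.AbstractMethodError.
public class SplitWords implements FlatMapFunction<String, String> {
    @Override
    public Iterator<String> call(String line) {
        // Spark 2.0.0+: return an Iterator directly.
        // Under Spark 1.x this method returned the Iterable itself:
        //   return Arrays.asList(line.split(" "));
        return Arrays.asList(line.split(" ")).iterator();
    }
}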

rhinempi closed this as completed Mar 9, 2017
rhinempi reopened this Mar 9, 2017
rhinempi reopened this Mar 10, 2017
tom-dyar commented

Hi! I ran into this while just getting set up -- is there a workaround??

rhinempi (Owner, Author) commented

Hi Tom,
Which Spark version are you using? There was a major interface change in Spark 2.0.0. If you are running Sparkhit on Spark 2.0.0 or later, use the Sparkhit 1.0 version. If your Spark cluster runs a version below 2.0.0 (say 1.6.0), you can still use the Sparkhit 0.8 version by editing the sparkhit executable shell script:
name="sparkhit"
version="1.0" # change to 0.8 if you are using a Spark version below 2.0.0
spark_version="2.0.0" # only for auto-downloading the Spark package
Let me know if you have further questions.

tom-dyar commented Feb 24, 2018 via email

rhinempi (Owner, Author) commented

Hi Tom,

Thank you for your advice and comments.
I will update the functions in the next release.

Liren
