```
java.io.IOException: OS command error exit with return code: 1, error message: log4j: Using URL [file:/home/hdoop/apache-kylin-4.0.1-bin-spark3/conf/spark-driver-log4j.properties] for automatic log4j configuration.
log4j: Reading configuration from URL file:/home/hdoop/apache-kylin-4.0.1-bin-spark3/conf/spark-driver-log4j.properties
log4j: Parsing for [root] with value=[INFO,hdfs].
log4j: Level token is [INFO].
log4j: Category root set to INFO
log4j: Parsing appender named "hdfs".
log4j: Parsing layout options for "hdfs".
log4j: Setting property [conversionPattern] to [%d{ISO8601} %-5p [%t] %c{2} : %m%n].
log4j: End of parsing for "hdfs".
log4j: Setting property [hdfsWorkingDir] to [s3a://bucketpath].
log4j: Setting property [kerberosPrincipal] to [].
log4j: Setting property [logPath] to [s3a://bucketpath/execute_output.json.1654872379334.log].
log4j: Setting property [kerberosEnable] to [false].
log4j: Setting property [kerberosKeytab] to [].
log4j: Setting property [logQueueCapacity] to [5000].
log4j: Setting property [flushInterval] to [5000].
log4j:WARN SparkDriverHdfsLogAppender starting ...
log4j:WARN hdfsWorkingDir -> s3a://devopscmd-prod/apachekylin/kylin_metadata/
log4j:WARN spark.driver.log4j.appender.hdfs.File -> s3a://bucketpath/execute_output.json.1654872379334.log
log4j:WARN kerberosEnable -> false
log4j:WARN SparkDriverHdfsLogAppender started ...
log4j: Parsed "hdfs" options.
log4j: Parsing for [org.springframework] with value=[WARN].
log4j: Level token is [WARN].
log4j: Category org.springframework set to WARN
log4j: Handling log4j.additivity.org.springframework=[null]
log4j: Parsing for [org.apache.spark] with value=[WARN].
log4j: Level token is [WARN].
log4j: Category org.apache.spark set to WARN
log4j: Handling log4j.additivity.org.apache.spark=[null]
log4j: Parsing for [org.apache.kylin] with value=[DEBUG].
log4j: Level token is [DEBUG].
log4j: Category org.apache.kylin set to DEBUG
log4j: Handling log4j.additivity.org.apache.kylin=[null]
log4j: Finished configuring.
log4j:WARN SparkDriverHdfsLogAppender flush log when shutdown ...
The command is:
export HADOOP_CONF_DIR=/home/hdoop/apache-kylin-4.0.1-bin-spark3/hadoop_conf && /home/hdoop/spark-3.1.1-bin-hadoop3.2/bin/spark-submit --class org.apache.kylin.engine.spark.application.SparkEntry --conf 'spark.yarn.queue=default' --conf 'spark.history.fs.logDirectory=s3a://bucket/apachekylin/' --conf 'spark.driver.extraJavaOptions=-XX:+CrashOnOutOfMemoryError -Dlog4j.configuration=file:/home/hdoop/apache-kylin-4.0.1-bin-spark3/conf/spark-driver-log4j.properties -Dkylin.kerberos.enabled=false -Dkylin.hdfs.working.dir=s3a://devopscmd-prod/apachekylin/kylin_metadata/ -Dspark.driver.log4j.appender.hdfs.File=s3a://bucketpath/execute_output.json.1654872379334.log -Dlog4j.debug=true -Dspark.driver.rest.server.address=ipaddress:7070 -Dspark.driver.param.taskId=ddf325d9-5232-422e-a5ff-0f35e4b0949c-00 -Dspark.driver.local.logDir=/home/hdoop/apache-kylin-4.0.1-bin-spark3/logs/spark' --conf 'spark.master=local' --conf 'spark.hadoop.yarn.timeline-service.enabled=false' --conf 'spark.driver.cores=1' --conf 'spark.eventLog.enabled=true' --conf 'spark.eventLog.dir=s3a://bucket/apachekylin/' --conf 'spark.driver.memory=2G' --conf 'spark.driver.memoryOverhead=512M' --conf 'spark.sql.autoBroadcastJoinThreshold=-1' --conf 'spark.sql.adaptive.enabled=false' --conf 'spark.driver.extraClassPath=/home/hdoop/apache-kylin-4.0.1-bin-spark3/lib/kylin-parquet-job-4.0.1.jar' --name job_step_ddf325d9-5232-422e-a5ff-0f35e4b0949c-00 --jars /home/hdoop/apache-kylin-4.0.1-bin-spark3/lib/kylin-parquet-job-4.0.1.jar /home/hdoop/apache-kylin-4.0.1-bin-spark3/lib/kylin-parquet-job-4.0.1.jar -className org.apache.kylin.engine.spark.job.ResourceDetectBeforeCubingJob s3a://bucketpath/ddf325d9-5232-422e-a5ff-0f35e4b0949c-00_jobId
at org.apache.kylin.common.util.CliCommandExecutor.execute(CliCommandExecutor.java:98)
at org.apache.kylin.engine.spark.job.NSparkExecutable.runSparkSubmit(NSparkExecutable.java:282)
at org.apache.kylin.engine.spark.job.NSparkExecutable.doWork(NSparkExecutable.java:168)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:206)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:94)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:206)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:113)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
```
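The log above is only the driver's log4j bootstrap output; the actual reason spark-submit exited with code 1 is not captured in it. One way to surface the underlying error (a debugging sketch, not a confirmed fix: the `/tmp` log path below is arbitrary, everything else is the exact command Kylin printed after "The command is:") is to re-run that command by hand and capture stderr:

```bash
# Re-run the spark-submit invocation Kylin printed, merging stderr into stdout
# and teeing it to a file so the real failure behind "return code: 1" is visible.
export HADOOP_CONF_DIR=/home/hdoop/apache-kylin-4.0.1-bin-spark3/hadoop_conf
/home/hdoop/spark-3.1.1-bin-hadoop3.2/bin/spark-submit \
  --class org.apache.kylin.engine.spark.application.SparkEntry \
  --conf 'spark.yarn.queue=default' \
  --conf 'spark.history.fs.logDirectory=s3a://bucket/apachekylin/' \
  --conf 'spark.driver.extraJavaOptions=-XX:+CrashOnOutOfMemoryError -Dlog4j.configuration=file:/home/hdoop/apache-kylin-4.0.1-bin-spark3/conf/spark-driver-log4j.properties -Dkylin.kerberos.enabled=false -Dkylin.hdfs.working.dir=s3a://devopscmd-prod/apachekylin/kylin_metadata/ -Dspark.driver.log4j.appender.hdfs.File=s3a://bucketpath/execute_output.json.1654872379334.log -Dlog4j.debug=true -Dspark.driver.rest.server.address=ipaddress:7070 -Dspark.driver.param.taskId=ddf325d9-5232-422e-a5ff-0f35e4b0949c-00 -Dspark.driver.local.logDir=/home/hdoop/apache-kylin-4.0.1-bin-spark3/logs/spark' \
  --conf 'spark.master=local' \
  --conf 'spark.hadoop.yarn.timeline-service.enabled=false' \
  --conf 'spark.driver.cores=1' \
  --conf 'spark.eventLog.enabled=true' \
  --conf 'spark.eventLog.dir=s3a://bucket/apachekylin/' \
  --conf 'spark.driver.memory=2G' \
  --conf 'spark.driver.memoryOverhead=512M' \
  --conf 'spark.sql.autoBroadcastJoinThreshold=-1' \
  --conf 'spark.sql.adaptive.enabled=false' \
  --conf 'spark.driver.extraClassPath=/home/hdoop/apache-kylin-4.0.1-bin-spark3/lib/kylin-parquet-job-4.0.1.jar' \
  --name job_step_ddf325d9-5232-422e-a5ff-0f35e4b0949c-00 \
  --jars /home/hdoop/apache-kylin-4.0.1-bin-spark3/lib/kylin-parquet-job-4.0.1.jar \
  /home/hdoop/apache-kylin-4.0.1-bin-spark3/lib/kylin-parquet-job-4.0.1.jar \
  -className org.apache.kylin.engine.spark.job.ResourceDetectBeforeCubingJob \
  s3a://bucketpath/ddf325d9-5232-422e-a5ff-0f35e4b0949c-00_jobId \
  2>&1 | tee /tmp/kylin-spark-submit.log   # hypothetical output path
```

The driver's own stderr (for example an S3A credentials or classpath failure) should then appear in the tee'd file rather than being swallowed by Kylin's `CliCommandExecutor`.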