Fwd: running sparkPi example on spark-0.8.1 with yarn 2.2.0

Izhar ul Hassan
Hi,

I am unable to run the SparkPi example after installing the latest version of Spark (0.8.1) with YARN 2.2.0.

Command:

SPARK_JAR=./assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar \
./spark-class org.apache.spark.deploy.yarn.Client \
  --jar examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar \
  --class org.apache.spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --master-memory 4g \
  --worker-memory 2g \
  --worker-cores 1
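The FileNotFoundException further down suggests the relative --jar path may be resolved from the container's working directory rather than from the directory I submit from. A hypothetical variant of the same submission with the jar paths given absolutely (same values as above, untested):

```shell
# Sketch only: identical submission, but with SPARK_JAR and --jar
# expanded to absolute paths so they resolve regardless of the
# working directory of the process that opens them.
SPARK_JAR=/home/hduser/DataAnalysis/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar \
./spark-class org.apache.spark.deploy.yarn.Client \
  --jar /home/hduser/DataAnalysis/spark/examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar \
  --class org.apache.spark.examples.SparkPi \
  --args yarn-standalone \
  --num-workers 3 \
  --master-memory 4g \
  --worker-memory 2g \
  --worker-cores 1
```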

Output:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hduser/DataAnalysis/spark/tools/target/scala-2.9.3/spark-tools-assembly-0.8.1-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hduser/DataAnalysis/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
13/12/29 23:33:19 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/12/29 23:33:20 INFO RMProxy: Connecting to ResourceManager at sp081/10.20.20.7:8040
13/12/29 23:33:20 INFO Client: Got Cluster metric info from ApplicationsManager (ASM), number of NodeManagers: 1
13/12/29 23:33:20 INFO Client: Queue info ... queueName: default, queueCurrentCapacity: 0.0, queueMaxCapacity: 1.0, queueApplicationCount = 3, queueChildQueueCount = 0
13/12/29 23:33:20 INFO Client: Max mem capabililty of a single resource in this cluster 8192
13/12/29 23:33:20 INFO Client: Preparing Local resources
13/12/29 23:33:21 INFO Client: Uploading file:/home/hduser/DataAnalysis/spark/examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar to hdfs://sp081:9000/user/hduser/.sparkStaging/application_1388331358746_0005/spark-examples-assembly-0.8.1-incubating.jar
13/12/29 23:33:24 INFO Client: Uploading file:/home/hduser/DataAnalysis/spark/assembly/target/scala-2.9.3/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar to hdfs://sp081:9000/user/hduser/.sparkStaging/application_1388331358746_0005/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar
13/12/29 23:33:25 INFO Client: Setting up the launch environment
13/12/29 23:33:25 INFO Client: Setting up container launch context
13/12/29 23:33:25 INFO Client: Command for starting the Spark ApplicationMaster:
$JAVA_HOME/bin/java -server -Xmx4096m -Djava.io.tmpdir=$PWD/tmp org.apache.spark.deploy.yarn.ApplicationMaster --class org.apache.spark.examples.SparkPi --jar examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar --args 'yarn-standalone' --worker-memory 2048 --worker-cores 1 --num-workers 3 1> /stdout 2> /stderr
13/12/29 23:33:25 INFO Client: Submitting application to ASM
13/12/29 23:33:25 INFO YarnClientImpl: Submitted application application_1388331358746_0005 to ResourceManager at sp081/10.20.20.7:8040
13/12/29 23:33:26 INFO Client: Application report from ASM:
    application identifier: application_1388331358746_0005
    appId: 5
    clientToAMToken: null
    appDiagnostics:
    appMasterHost: N/A
    appQueue: default
    appMasterRpcPort: 0
    appStartTime: 1388360005512
    yarnAppState: ACCEPTED
    distributedFinalState: UNDEFINED
    appTrackingUrl: sp081:8088/proxy/application_1388331358746_0005/
    appUser: hduser

The ResourceManager log (yarn-user-resourcemanager.log) repeatedly shows errors like:


2013-12-29 23:35:18,793 ERROR org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl: Can't handle this event at current state
org.apache.hadoop.yarn.state.InvalidStateTransitonException: Invalid event: UNREGISTERED at LAUNCHED
        at org.apache.hadoop.yarn.state.StateMachineFactory.doTransition(StateMachineFactory.java:305)
        at org.apache.hadoop.yarn.state.StateMachineFactory.access$300(StateMachineFactory.java:46)
        at org.apache.hadoop.yarn.state.StateMachineFactory$InternalStateMachine.doTransition(StateMachineFactory.java:448)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:625)
        at org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptImpl.handle(RMAppAttemptImpl.java:104)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:566)
        at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher.handle(ResourceManager.java:547)
        at org.apache.hadoop.yarn.event.AsyncDispatcher.dispatch(AsyncDispatcher.java:134)
        at org.apache.hadoop.yarn.event.AsyncDispatcher$1.run(AsyncDispatcher.java:81)
        at java.lang.Thread.run(Thread.java:701)


Stderr:

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/filecache/17/spark-assembly-0.8.1-incubating-hadoop2.2.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hduser/DataAnalysis/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/tmp/hadoop-hduser/nm-local-dir/usercache/hduser/filecache/16/spark-examples-assembly-0.8.1-incubating.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
13/12/29 23:33:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
13/12/29 23:33:37 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1388331358746_0005_000001
13/12/29 23:33:37 INFO client.RMProxy: Connecting to ResourceManager at sp081/10.20.20.7:8030
13/12/29 23:33:37 INFO yarn.ApplicationMaster: Starting the user JAR in a separate Thread
13/12/29 23:33:37 INFO yarn.ApplicationMaster: Waiting for Spark driver to be reachable.
13/12/29 23:33:37 WARN yarn.ApplicationMaster: Failed to connect to driver at null:null, retrying ...
13/12/29 23:33:38 INFO slf4j.Slf4jEventHandler: Slf4jEventHandler started
13/12/29 23:33:38 WARN yarn.ApplicationMaster: Failed to connect to driver at sp081:0, retrying ...
13/12/29 23:33:38 WARN yarn.ApplicationMaster: Failed to connect to driver at sp081:0, retrying ...
13/12/29 23:33:38 WARN yarn.ApplicationMaster: Failed to connect to driver at sp081:0, retrying ...
13/12/29 23:33:38 INFO spark.SparkEnv: Registering BlockManagerMaster
13/12/29 23:33:38 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1388331358746_0005/spark-local-20131229233338-f2eb
13/12/29 23:33:38 INFO yarn.ApplicationMaster: Waiting for Spark context initialization
13/12/29 23:33:38 INFO yarn.ApplicationMaster: Waiting for Spark context initialization ... 0
13/12/29 23:33:38 INFO storage.MemoryStore: MemoryStore started with capacity 2.3 GB.
13/12/29 23:33:38 INFO network.ConnectionManager: Bound socket to port 42477 with id = ConnectionManagerId(sp081,42477)
13/12/29 23:33:38 INFO storage.BlockManagerMaster: Trying to register BlockManager
13/12/29 23:33:38 INFO storage.BlockManagerMasterActor$BlockManagerInfo: Registering block manager sp081:42477 with 2.3 GB RAM
13/12/29 23:33:38 INFO storage.BlockManagerMaster: Registered BlockManager
13/12/29 23:33:39 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/29 23:33:39 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:50409
13/12/29 23:33:39 INFO broadcast.HttpBroadcast: Broadcast server started at http://10.20.20.7:50409
13/12/29 23:33:39 INFO spark.SparkEnv: Registering MapOutputTracker
13/12/29 23:33:39 INFO spark.HttpFileServer: HTTP File server directory is /tmp/hadoop-hduser/nm-local-dir/usercache/hduser/appcache/application_1388331358746_0005/container_1388331358746_0005_01_000001/tmp/spark-fb12631a-78e6-492e-823c-fc0e0ae8c835
13/12/29 23:33:39 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/29 23:33:39 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:42845
13/12/29 23:33:39 INFO server.Server: jetty-7.x.y-SNAPSHOT
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage/rdd,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/storage,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages/stage,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages/pool,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/stages,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/environment,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/executors,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/metrics/json,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/static,null}
13/12/29 23:33:39 INFO handler.ContextHandler: started o.e.j.s.h.ContextHandler{/,null}
13/12/29 23:33:39 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
13/12/29 23:33:39 INFO ui.SparkUI: Started Spark Web UI at http://sp081:4040
13/12/29 23:33:39 ERROR spark.SparkContext: Error adding jar (java.io.FileNotFoundException: spark-examples-assembly-0.8.1-incubating.jar (No such file or directory)), was the --addJars option used?
13/12/29 23:33:39 INFO yarn.ApplicationMaster: finishApplicationMaster with FAILED
13/12/29 23:33:39 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.

13/12/29 23:35:18 WARN yarn.ApplicationMaster: Unable to retrieve SparkContext inspite of waiting for 100000, maxNumTries = 10
13/12/29 23:35:18 INFO yarn.ApplicationMaster: Registering the ApplicationMaster
13/12/29 23:35:18 INFO yarn.ApplicationMaster: Allocating 3 workers.
13/12/29 23:35:18 INFO yarn.YarnAllocationHandler: Will Allocate 3 worker containers, each with 2432 memory
13/12/29 23:35:18 INFO yarn.YarnAllocationHandler: Container request (host: Any, priority: 1, capability:
13/12/29 23:35:18 INFO yarn.YarnAllocationHandler: Container request (host: Any, priority: 1, capability:
13/12/29 23:35:18 INFO yarn.YarnAllocationHandler: Container request (host: Any, priority: 1, capability:
13/12/29 23:35:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
Exception in thread "Thread-3" java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:622)
        at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:174)
Caused by: java.io.FileNotFoundException: spark-examples-assembly-0.8.1-incubating.jar (No such file or directory)
        at java.io.FileInputStream.open(Native Method)
        at java.io.FileInputStream.(FileInputStream.java:140)
        at com.google.common.io.Files$FileByteSource.openStream(Files.java:124)
        at com.google.common.io.Files$FileByteSource.openStream(Files.java:114)
        at com.google.common.io.ByteSource.copyTo(ByteSource.java:202)
        at com.google.common.io.Files.copy(Files.java:436)
        at org.apache.spark.HttpFileServer.addFileToDir(HttpFileServer.scala:59)
        at org.apache.spark.HttpFileServer.addJar(HttpFileServer.scala:54)
        at org.apache.spark.SparkContext.addJar(SparkContext.scala:751)
        at org.apache.spark.SparkContext$$anonfun$3.apply(SparkContext.scala:129)
        at org.apache.spark.SparkContext$$anonfun$3.apply(SparkContext.scala:129)
        at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
        at scala.collection.immutable.List.foreach(List.scala:76)
        at org.apache.spark.SparkContext.(SparkContext.scala:129)
        at org.apache.spark.examples.SparkPi$.main(SparkPi.scala:31)
        at org.apache.spark.examples.SparkPi.main(SparkPi.scala)
        ... 5 more
13/12/29 23:35:19 INFO yarn.ApplicationMaster: All workers have launched.
13/12/29 23:35:19 INFO yarn.ApplicationMaster: AppMaster received a signal.
13/12/29 23:35:19 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1388331358746_0005

My environment variables:

SPARK_EXAMPLES_JAR=/home/hduser/DataAnalysis/spark/examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar
INSTALL_DIR=/home/hduser/DataAnalysis
HADOOP_PREFIX=/home/hduser/DataAnalysis/hadoop
YARN_HOME=/home/hduser/DataAnalysis/hadoop
HADOOP_HDFS_HOME=/home/hduser/DataAnalysis/hadoop
SPARK_DIR=/home/hduser/DataAnalysis/spark
HADOOP_COMMON_HOME=/home/hduser/DataAnalysis/hadoop
JAVA_HOME=/usr/lib/jvm/default-java
HADOOP_CONF_DIR=/home/hduser/DataAnalysis/hadoop/etc/hadoop
YARN_CONF_DIR=/home/hduser/DataAnalysis/hadoop/etc/hadoop
HADOOP_MAPRED_HOME=/home/hduser/DataAnalysis/hadoop
SCALA_HOME=/home/hduser/DataAnalysis/scala
HADOOP_DIR=/home/hduser/DataAnalysis/hadoop
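To rule out a simple path problem, here is a small sh sketch I can run from the directory I submit from; check_jar is just a hypothetical helper name, and the paths are the ones used in the submission above:

```shell
#!/bin/sh
# Hypothetical helper: report whether a jar path resolves to an actual
# file from the current working directory. The ERROR in stderr came from
# SparkContext.addJar failing to open the examples jar by a relative path.
check_jar() {
  if [ -f "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1"
  fi
}

# The relative path passed via --jar, and the absolute one from the environment:
check_jar "examples/target/scala-2.9.3/spark-examples-assembly-0.8.1-incubating.jar"
check_jar "$SPARK_EXAMPLES_JAR"
```

If the relative path prints "missing" anywhere except the Spark install directory, that would be consistent with the FileNotFoundException seen inside the container.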

What am I missing here?

Best
Izhar

--
You received this message because you are subscribed to a topic in the Google Groups "Spark Users" group.
To unsubscribe from this topic, visit https://groups.google.com/d/topic/spark-users/Bnpz1dAfNgQ/unsubscribe.
To unsubscribe from this group and all its topics, send an email to [hidden email].
For more options, visit https://groups.google.com/groups/opt_out.