Monday, August 10, 2015

Running Spark on YARN

There are two deploy modes that can be used to launch Spark applications on YARN. 
In yarn-cluster mode, the Spark driver runs inside an application master process which is managed by YARN on the cluster, and the client can go away after initiating the application. 
In yarn-client mode, the driver runs in the client process, and the application master is only used for requesting resources from YARN.
Unlike in Spark standalone mode, in which the master’s address is specified in the --master parameter, in YARN mode the ResourceManager’s address is picked up from the Hadoop configuration.
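
As a minimal sketch of what this looks like with spark-submit (using the SparkPi example class and jar that the mapr-spark package installs, as seen later in this post), the mode is selected through the --master value:

# yarn-cluster: driver runs inside the YARN application master; the client can exit after submitting
/opt/mapr/spark/spark-1.2.1/bin/spark-submit --master yarn-cluster \
  --class org.apache.spark.examples.SparkPi \
  /opt/mapr/spark/spark-1.2.1/lib/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar

# yarn-client: driver runs in the local client process; the AM only requests resources
/opt/mapr/spark/spark-1.2.1/bin/spark-submit --master yarn-client \
  --class org.apache.spark.examples.SparkPi \
  /opt/mapr/spark/spark-1.2.1/lib/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar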


Spark is distributed as two separate packages:
  • mapr-spark
  • mapr-spark-historyserver for Spark History Server (optional)


Note :- It is assumed that the core MapR packages and the YARN packages are already installed, and that Warden is stopped on the cluster, before installing and configuring Spark on the nodes.

1)  On one node, install the Spark package and the Spark History Server package; on the rest of the nodes, installing just the Spark package is enough.

yum install mapr-spark mapr-spark-historyserver -y
yum install mapr-spark -y
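
If clush is already configured across the cluster (it is used the same way in the standalone post below), the installs can be pushed out in one shot; the node names here are only placeholders for your own cluster:

# Node that will also run the Spark History Server
clush -w yarn1 'yum install mapr-spark mapr-spark-historyserver -y'
# Remaining nodes
clush -w yarn2 'yum install mapr-spark -y'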


2)  Run the configure.sh command:

[root@yarn1 ~]#/opt/mapr/server/configure.sh -R
Configuring Hadoop-2.5.1 at /opt/mapr/hadoop/hadoop-2.5.1
Done configuring Hadoop
Node setup configuration:  cldb fileserver historyserver nodemanager spark-historyserver webserver zookeeper
Log can be found at:  /opt/mapr/logs/configure.log
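
Optionally, you can confirm the Spark roles were picked up. One simple check, assuming the standard MapR layout, is to look at the role files on the node, and, if maprcli is available, the per-node service list:

ls /opt/mapr/roles | grep spark
maprcli node list -columns svc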

That's all. Spark is preconfigured for YARN and does not require any additional configuration to run. To test the installation, run the following command:

[root@yarn1 ~]# su - mapr

[mapr@yarn1 ~]$ MASTER=yarn-cluster /opt/mapr/spark/spark-1.2.1/bin/run-example org.apache.spark.examples.SparkPi
Spark assembly has been built with Hive, including Datanucleus jars on classpath
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/mapr/spark/spark-1.2.1/lib/spark-assembly-1.2.1-hadoop2.5.1-mapr-1501.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/08/09 21:18:17 INFO client.RMProxy: Connecting to ResourceManager at /10.10.70.118:8032
15/08/09 21:18:18 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
15/08/09 21:18:18 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
15/08/09 21:18:18 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
15/08/09 21:18:18 INFO yarn.Client: Setting up container launch context for our AM
15/08/09 21:18:18 INFO yarn.Client: Preparing resources for our AM container
15/08/09 21:18:18 INFO yarn.Client: Uploading resource file:///opt/mapr/spark/spark-1.2.1/lib/spark-assembly-1.2.1-hadoop2.5.1-mapr-1501.jar -> maprfs:/user/mapr/.sparkStaging/application_1439183740763_0002/spark-assembly-1.2.1-hadoop2.5.1-mapr-1501.jar
15/08/09 21:18:23 INFO yarn.Client: Uploading resource file:/opt/mapr/spark/spark-1.2.1/lib/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar -> maprfs:/user/mapr/.sparkStaging/application_1439183740763_0002/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar
15/08/09 21:18:29 INFO yarn.Client: Setting up the launch environment for our AM container
15/08/09 21:18:29 INFO spark.SecurityManager: Changing view acls to: mapr
15/08/09 21:18:29 INFO spark.SecurityManager: Changing modify acls to: mapr
15/08/09 21:18:29 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mapr); users with modify permissions: Set(mapr)
15/08/09 21:18:29 INFO yarn.Client: Submitting application 2 to ResourceManager
15/08/09 21:18:29 INFO security.ExternalTokenManagerFactory: Initialized external token manager class - com.mapr.hadoop.yarn.security.MapRTicketManager
15/08/09 21:18:29 INFO impl.YarnClientImpl: Submitted application application_1439183740763_0002
15/08/09 21:18:30 INFO yarn.Client: Application report for application_1439183740763_0002 (state: ACCEPTED)
15/08/09 21:18:30 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: N/A
ApplicationMaster RPC port: -1
queue: root.mapr
start time: 1439183909584
final status: UNDEFINED
tracking URL: http://yarn2:8088/proxy/application_1439183740763_0002/
user: mapr
15/08/09 21:18:31 INFO yarn.Client: Application report for application_1439183740763_0002 (state: ACCEPTED)
15/08/09 21:18:32 INFO yarn.Client: Application report for application_1439183740763_0002 (state: ACCEPTED)
15/08/09 21:18:40 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:40 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: yarn1
ApplicationMaster RPC port: 0
queue: root.mapr
start time: 1439183909584
final status: UNDEFINED
tracking URL: http://yarn2:8088/proxy/application_1439183740763_0002/
user: mapr
15/08/09 21:18:41 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:42 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:43 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:44 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:45 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:55 INFO yarn.Client: Application report for application_1439183740763_0002 (state: RUNNING)
15/08/09 21:18:56 INFO yarn.Client: Application report for application_1439183740763_0002 (state: FINISHED)
15/08/09 21:18:56 INFO yarn.Client:
client token: N/A
diagnostics: N/A
ApplicationMaster host: yarn1
ApplicationMaster RPC port: 0
queue: root.mapr
start time: 1439183909584
final status: SUCCEEDED
tracking URL: http://yarn2:8088/proxy/application_1439183740763_0002/history/application_1439183740763_0002
user: mapr
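
Since this ran in yarn-cluster mode, the driver output (the "Pi is roughly ..." line) ends up in the application master's container log rather than on the client console. Assuming YARN log aggregation is in place, it can be pulled back with the yarn logs command:

[mapr@yarn1 ~]$ yarn logs -applicationId application_1439183740763_0002 | grep -i "pi is roughly"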

Sunday, August 9, 2015

Installing Spark in Standalone mode

This post has instructions for installing and running Spark 1.2.1 in standalone mode on a MapR 4.1 cluster.
In standalone mode Spark runs on the cluster directly and does not need the MapReduce framework to run jobs (it has its own execution engine). Spark is distributed as three separate packages:
  • mapr-spark for Spark worker nodes
  • mapr-spark-master for Spark master nodes
  • mapr-spark-historyserver for the Spark History Server (used to view jobs run earlier)


Note :- This post assumes the MapR core packages are installed on the nodes, the nodes are configured correctly, and Warden has just been stopped so that Spark can be installed on the cluster.

1)  On one of the nodes, install the Spark master, Spark worker, and Spark History Server packages.

yum install mapr-spark mapr-spark-master mapr-spark-historyserver -y

2)  On the rest of the nodes, just the Spark worker package is enough.

yum install mapr-spark -y

3)  We can verify the packages were installed correctly on all nodes at once via clush, as below.

[root@node1 ~]# clush -ab
Enter 'quit' to leave this interactive mode
Working with nodes: node[1-4]
clush> rpm -qa| grep spark
---------------
node[1,3-4] (3)
---------------
mapr-spark-1.2.1.201506091827-1.noarch
---------------
node2
---------------
mapr-spark-1.2.1.201506091827-1.noarch
mapr-spark-master-1.2.1.201506091827-1.noarch
mapr-spark-historyserver-1.2.1.201506091827-1.noarch
clush> quit


4)  Now run the configure.sh script on the master node so that the node registers the newly installed Spark packages and lists them in its roles.

[root@node2 ~]# /opt/mapr/server/configure.sh -R
Configuring Hadoop-2.5.1 at /opt/mapr/hadoop/hadoop-2.5.1
Done configuring Hadoop
Node setup configuration:  cldb fileserver hivemetastore hiveserver2 spark-historyserver spark-master tasktracker zookeeper
Log can be found at:  /opt/mapr/logs/configure.log

5)  Now create a slaves file and add the hostnames of all the nodes in the cluster where the Spark worker package is installed.

[root@node2 ~]# vi /opt/mapr/spark/spark-1.2.1/conf/slaves
localhost
node1
node3
node4

6)  Now make sure passwordless SSH trust exists from the Spark master node to all worker nodes for the mapr user (su - mapr first). The blog post below shows how to create the trust quickly.

http://abizeradenwala.blogspot.com/2015/07/creating-ssh-trust-quickly.html

Note :- You can verify the trust by SSHing into the other hosts as the mapr user without being prompted for a password.
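
For reference, a minimal sketch of creating the trust as the mapr user on the master node (node names as used in this post):

[mapr@node2 ~]$ ssh-keygen -t rsa            # accept defaults, empty passphrase
[mapr@node2 ~]$ ssh-copy-id mapr@node1
[mapr@node2 ~]$ ssh-copy-id mapr@node3
[mapr@node2 ~]$ ssh-copy-id mapr@node4
[mapr@node2 ~]$ ssh node1 hostname           # should return node1 without a password prompt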

7)  Now, starting Warden on the master node should bring up the Spark master and the History Server as well.

[root@node2 ~]# service mapr-warden start
Starting WARDEN, logging to /opt/mapr/logs/warden.log.
For diagnostics look at /opt/mapr/logs/ for createsystemvolumes.log, warden.log and configured services log files

Note :- Start the Warden service on the rest of the nodes as well and make sure the cluster is fully up.
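
For example, using clush from step 3, Warden on the remaining nodes can be started in one go and then checked (adjust the node list to your cluster):

clush -w node[1,3-4] 'service mapr-warden start'
clush -ab 'service mapr-warden status'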

8)  The Spark worker services on all slave machines can be started via the start-slaves.sh script.

[mapr@node2 ~]$ /opt/mapr/spark/spark-1.2.1/sbin/start-slaves.sh
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /opt/mapr/spark/spark-1.2.1/sbin/../logs/spark-mapr-org.apache.spark.deploy.worker.Worker-1-node2.mycluster.com.out
node1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/mapr/spark/spark-1.2.1/logs/spark-mapr-org.apache.spark.deploy.worker.Worker-1-node1.mycluster.com.out
node3: starting org.apache.spark.deploy.worker.Worker, logging to /opt/mapr/spark/spark-1.2.1/logs/spark-mapr-org.apache.spark.deploy.worker.Worker-1-node3.mycluster.com.out
node4: starting org.apache.spark.deploy.worker.Worker, logging to /opt/mapr/spark/spark-1.2.1/logs/spark-mapr-org.apache.spark.deploy.worker.Worker-1-node4.mycluster.com.out
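
A quick sanity check that a worker JVM actually came up on every node (this assumes jps is on the PATH for the clush user); the workers should also show up as registered on the Spark master web UI, which listens on port 8080 by default:

clush -ab 'jps | grep -i worker'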


9)  Once the Spark master and all Spark worker nodes are up, we can run a sample Pi job to make sure the Spark cluster is functional.

[mapr@node2 logs]$ MASTER=spark://node2:7077  /opt/mapr/spark/spark-1.2.1/bin/run-example org.apache.spark.examples.SparkPi
Spark assembly has been built with Hive, including Datanucleus jars on classpath
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/mapr/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/mapr/spark/spark-1.2.1/lib/spark-assembly-1.2.1-hadoop2.5.1-mapr-1501.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
15/08/07 19:42:25 INFO spark.SparkContext: Spark configuration:
spark.app.name=Spark Pi
spark.eventLog.dir=maprfs:///apps/spark
spark.eventLog.enabled=true
spark.executor.extraClassPath=
spark.executor.memory=2g
spark.jars=file:/opt/mapr/spark/spark-1.2.1/lib/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar
spark.logConf=true
spark.master=spark://node2:7077
spark.yarn.historyServer.address=http://node2:18080
15/08/07 19:42:25 INFO spark.SecurityManager: Changing view acls to: mapr
15/08/07 19:42:25 INFO spark.SecurityManager: Changing modify acls to: mapr
15/08/07 19:42:25 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mapr); users with modify permissions: Set(mapr)
15/08/07 19:42:26 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/08/07 19:42:26 INFO Remoting: Starting remoting
15/08/07 19:42:26 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@node2:33543]
15/08/07 19:42:26 INFO util.Utils: Successfully started service 'sparkDriver' on port 33543.
15/08/07 19:42:26 INFO spark.SparkEnv: Registering MapOutputTracker
15/08/07 19:42:26 INFO spark.SparkEnv: Registering BlockManagerMaster
15/08/07 19:42:26 INFO storage.DiskBlockManager: Created local directory at /tmp/spark-54a795ed-25cf-44b1-80d5-e90e45e86e60/spark-eb8fdea4-9623-4a68-89a7-1b3e0314ccaa
15/08/07 19:42:26 INFO storage.MemoryStore: MemoryStore started with capacity 265.4 MB
15/08/07 19:42:27 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-ea8c7aa1-0ddd-47ff-9e8b-33f9d7d84ac6/spark-c735f7f9-a4f5-4a2d-af8f-6d4c1b770d59
15/08/07 19:42:27 INFO spark.HttpServer: Starting HTTP Server
15/08/07 19:42:27 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/07 19:42:27 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:37798
15/08/07 19:42:27 INFO util.Utils: Successfully started service 'HTTP file server' on port 37798.
15/08/07 19:42:27 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/07 19:42:27 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
15/08/07 19:42:27 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/08/07 19:42:27 INFO ui.SparkUI: Started SparkUI at http://node2:4040
15/08/07 19:42:28 INFO spark.SparkContext: Added JAR file:/opt/mapr/spark/spark-1.2.1/lib/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar at http://10.10.70.107:37798/jars/spark-examples-1.2.1-hadoop2.5.1-mapr-1501.jar with timestamp 1439005348176
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Connecting to master spark://node2:7077...
15/08/07 19:42:28 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20150807194228-0000
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor added: app-20150807194228-0000/0 on worker-20150807192803-node1-50432 (node1:50432) with 2 cores
15/08/07 19:42:28 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150807194228-0000/0 on hostPort node1:50432 with 2 cores, 2.0 GB RAM
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor added: app-20150807194228-0000/1 on worker-20150807192805-node2-59112 (node2:59112) with 2 cores
15/08/07 19:42:28 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150807194228-0000/1 on hostPort node2:59112 with 2 cores, 2.0 GB RAM
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor added: app-20150807194228-0000/2 on worker-20150807192808-node4-57599 (node4:57599) with 2 cores
15/08/07 19:42:28 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150807194228-0000/2 on hostPort node4:57599 with 2 cores, 2.0 GB RAM
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor added: app-20150807194228-0000/3 on worker-20150807192729-node2-57531 (node2:57531) with 2 cores
15/08/07 19:42:28 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150807194228-0000/3 on hostPort node2:57531 with 2 cores, 2.0 GB RAM
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor added: app-20150807194228-0000/4 on worker-20150807192805-node3-38257 (node3:38257) with 2 cores
15/08/07 19:42:28 INFO cluster.SparkDeploySchedulerBackend: Granted executor ID app-20150807194228-0000/4 on hostPort node3:38257 with 2 cores, 2.0 GB RAM
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/0 is now RUNNING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/1 is now RUNNING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/2 is now RUNNING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/0 is now LOADING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/3 is now RUNNING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/2 is now LOADING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/4 is now RUNNING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/4 is now LOADING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/1 is now LOADING
15/08/07 19:42:28 INFO client.AppClient$ClientActor: Executor updated: app-20150807194228-0000/3 is now LOADING
15/08/07 19:42:28 INFO netty.NettyBlockTransferService: Server created on 37575
15/08/07 19:42:28 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/08/07 19:42:28 INFO storage.BlockManagerMasterActor: Registering block manager node2:37575 with 265.4 MB RAM, BlockManagerId(<driver>, node2, 37575)
15/08/07 19:42:28 INFO storage.BlockManagerMaster: Registered BlockManager
15/08/07 19:42:29 INFO scheduler.EventLoggingListener: Logging events to maprfs:///apps/spark/app-20150807194228-0000
15/08/07 19:42:30 INFO cluster.SparkDeploySchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
15/08/07 19:42:30 INFO spark.SparkContext: Starting job: reduce at SparkPi.scala:35
15/08/07 19:42:30 INFO scheduler.DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 2 output partitions (allowLocal=false)
15/08/07 19:42:30 INFO scheduler.DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
15/08/07 19:42:30 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/08/07 19:42:30 INFO scheduler.DAGScheduler: Missing parents: List()
15/08/07 19:42:30 INFO scheduler.DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:31), which has no missing parents
15/08/07 19:42:31 INFO storage.MemoryStore: ensureFreeSpace(1728) called with curMem=0, maxMem=278302556
15/08/07 19:42:31 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1728.0 B, free 265.4 MB)
15/08/07 19:42:31 INFO storage.MemoryStore: ensureFreeSpace(1126) called with curMem=1728, maxMem=278302556
15/08/07 19:42:31 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1126.0 B, free 265.4 MB)
15/08/07 19:42:31 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on node2:37575 (size: 1126.0 B, free: 265.4 MB)
15/08/07 19:42:31 INFO storage.BlockManagerMaster: Updated info of block broadcast_0_piece0
15/08/07 19:42:31 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:838
15/08/07 19:42:31 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:31)
15/08/07 19:42:31 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/08/07 19:42:33 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node4:45758/user/Executor#-2015134222] with ID 2
15/08/07 19:42:33 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, node4, PROCESS_LOCAL, 1347 bytes)
15/08/07 19:42:33 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, node4, PROCESS_LOCAL, 1347 bytes)
15/08/07 19:42:33 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node1:33791/user/Executor#-776059221] with ID 0
15/08/07 19:42:33 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node3:35539/user/Executor#790771493] with ID 4
15/08/07 19:42:33 INFO storage.BlockManagerMasterActor: Registering block manager node4:54576 with 1060.3 MB RAM, BlockManagerId(2, node4, 54576)
15/08/07 19:42:33 INFO storage.BlockManagerMasterActor: Registering block manager node1:33323 with 1060.3 MB RAM, BlockManagerId(0, node1, 33323)
15/08/07 19:42:35 INFO storage.BlockManagerMasterActor: Registering block manager node3:56516 with 1060.3 MB RAM, BlockManagerId(4, node3, 56516)
15/08/07 19:42:37 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node2:51330/user/Executor#162860550] with ID 3
15/08/07 19:42:37 INFO cluster.SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://sparkExecutor@node2:50174/user/Executor#1110281472] with ID 1
15/08/07 19:42:38 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on node4:54576 (size: 1126.0 B, free: 1060.3 MB)
15/08/07 19:42:39 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 5832 ms on node4 (1/2)
15/08/07 19:42:39 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 5815 ms on node4 (2/2)
15/08/07 19:42:39 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/08/07 19:42:39 INFO scheduler.DAGScheduler: Stage 0 (reduce at SparkPi.scala:35) finished in 7.855 s
15/08/07 19:42:39 INFO scheduler.DAGScheduler: Job 0 finished: reduce at SparkPi.scala:35, took 8.736218 s
Pi is roughly 3.13904
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/metrics/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/kill,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/static,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/threadDump/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/threadDump,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/executors,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/environment,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/rdd,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/storage,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/pool,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/stage,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/stages,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/job/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/job,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs/json,null}
15/08/07 19:42:39 INFO handler.ContextHandler: stopped o.e.j.s.ServletContextHandler{/jobs,null}
15/08/07 19:42:39 INFO ui.SparkUI: Stopped Spark web UI at http://node2:4040
15/08/07 19:42:39 INFO scheduler.DAGScheduler: Stopping DAGScheduler
15/08/07 19:42:39 INFO cluster.SparkDeploySchedulerBackend: Shutting down all executors
15/08/07 19:42:39 INFO cluster.SparkDeploySchedulerBackend: Asking each executor to shut down
15/08/07 19:42:40 INFO storage.BlockManagerMasterActor: Registering block manager node2:49489 with 1060.3 MB RAM, BlockManagerId(1, node2, 49489)
15/08/07 19:42:40 INFO storage.BlockManagerMasterActor: Registering block manager node2:59803 with 1060.3 MB RAM, BlockManagerId(3, node2, 59803)
15/08/07 19:42:40 INFO spark.MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
15/08/07 19:42:40 INFO storage.MemoryStore: MemoryStore cleared
15/08/07 19:42:40 INFO storage.BlockManager: BlockManager stopped
15/08/07 19:42:40 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
15/08/07 19:42:40 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
15/08/07 19:42:40 INFO spark.SparkContext: Successfully stopped SparkContext
15/08/07 19:42:40 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
15/08/07 19:42:40 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.


As seen above, the Pi job completes successfully and reports a Pi value of roughly 3.13904, which confirms that the installation was successful and that Spark in standalone mode is configured correctly.
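
Because the job logged its events to maprfs:///apps/spark (spark.eventLog.dir in the configuration dump above), the finished application can also be reviewed in the Spark History Server, whose address appears above as http://node2:18080. A quick check that the event log landed:

[mapr@node2 ~]$ hadoop fs -ls /apps/spark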