Spark connecting to wrong Filesystem.uri


Mskh
Hi,

I had Spark/Shark running successfully on my Hadoop cluster. For various reasons I had to change the IP addresses of my 6 Hadoop nodes, and since then I have been unable to create a cached table in memory using Shark.

While 10.14.xx.xx in the first line below is the new address, Shark/Spark is still trying to connect to the old IP address 10.xx.xx.xx, as is evident at the bottom of the trace, and Shark seems to get stuck at that stage with no cached tables being created.

Could someone please shed some light on this, as I'm unable to use Shark/Spark at all in its current state?
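
For what it's worth, my understanding is that the Hive metastore records the filesystem URI as part of each database and table definition, so the old address could still be sitting in the table metadata even after the cluster configuration is updated. If that's the case here, something like the following from the Shark CLI should show it (ly is the database from the trace below; my_table is just a placeholder for the table I'm trying to cache):

    DESCRIBE DATABASE ly;
    DESCRIBE FORMATTED my_table;

The Location field in that output should reveal which cfs:// host the table still points at.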


This is the output from running shark-withinfo:

INFO cfs.CassandraFileSystem: CassandraFileSystem.uri : cfs://10.14.xx.xx/
INFO cfs.CassandraFileSystem: Default block size: 67108864
INFO cfs.CassandraFileSystemThriftStore: Consistency level for reads from cfs: LOCAL_QUORUM
INFO cfs.CassandraFileSystemThriftStore: Consistency level for writes into cfs: LOCAL_QUORUM
INFO cfs.CassandraFileSystemRules: Successfully loaded path rules for: cfs
INFO parse.SharkSemanticAnalyzer: Completed getting MetaData in Semantic Analysis
INFO ppd.OpProcFactory: Processing for FS(2)
INFO ppd.OpProcFactory: Processing for SEL(1)
INFO ppd.OpProcFactory: Processing for TS(0)
INFO metastore.HiveMetaStore: 0: get_database: ly
INFO metastore.HiveMetaStore: 0: get_database: ly
INFO cfs.CassandraFileSystem: CassandraFileSystem.uri : cfs://10.xx.xx.xx/
INFO cfs.CassandraFileSystem: Default block size: 67108864
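
If the stale URI does turn out to be stored in the metastore, one possible workaround (just a sketch, with a placeholder table name and warehouse path, and I haven't verified it against CFS) would be to repoint the table at the new address and make sure fs.default.name in the Hadoop configuration refers to the new host:

    ALTER TABLE my_table SET LOCATION 'cfs://10.14.xx.xx/user/hive/warehouse/ly.db/my_table';

I'd still appreciate confirmation of where else the old 10.xx.xx.xx address might be cached.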