Standalone cluster setup: binding to private IP

Standalone cluster setup: binding to private IP

David Thomas
I have a set of VMs, and each VM instance has its own private IP and a publicly accessible IP. When I start the master with the default values, it throws a bind exception saying it cannot bind to the public IP. So I set SPARK_MASTER_IP to the private IP and it starts up fine. Now how do I achieve the same for the worker nodes? If I run start-slaves.sh, I get the bind exception. I can log in to each slave and pass the -i option to spark-class org.apache.spark.deploy.worker.Worker, but isn't there a more efficient way to start all the workers from the master node?
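
(For reference, this is roughly what I run manually on each worker right now; the addresses are placeholders, not my real IPs:)

# run on every worker node: bind to that worker's private IP and register with the master
./bin/spark-class org.apache.spark.deploy.worker.Worker -i 10.0.0.11 spark://10.0.0.1:7077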

Re: Standalone cluster setup: binding to private IP

Nan Zhu
You can list your private IPs in the conf/slaves file

and start the daemons with sbin/start-all.sh.

Before that, you'll want to set up passwordless SSH login from your master node to all of your worker nodes.
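
For example (a quick sketch; the addresses below are just placeholders):

# conf/slaves on the master node: one worker private IP per line
10.0.0.11
10.0.0.12
10.0.0.13

# then, from the master node
sbin/start-all.sh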

Best,

-- 
Nan Zhu



Re: Standalone cluster setup: binding to private IP

David Thomas
That didn't work. I listed the private IPs of the worker nodes in the conf/slaves file on the master node and ran sbin/start-slaves.sh, but I still get the same error on the worker nodes: Exception in thread "main" org.jboss.netty.channel.ChannelException: Failed to bind to: slave1/<Public IP>:0



Re: Standalone cluster setup: binding to private IP

Nan Zhu
Oh, sorry, I misunderstood your question.

I thought you were asking how to start the worker processes from the master node.

So you can actually start the processes remotely from the master node, but you get an exception while the processes are starting?

Spark binds to the IP address of your first NIC by default,

so you can either ensure that your default NIC is the one bound to the private IP, or set SPARK_LOCAL_IP to the private address on the worker nodes.
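
For example (a sketch; the address is a placeholder, and each worker should use its own private IP):

# conf/spark-env.sh on each worker node
export SPARK_LOCAL_IP=10.0.0.11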

Best,

-- 
Nan Zhu


Re: Standalone cluster setup: binding to private IP

David Thomas
Thanks for your prompt reply.

> ensure that your default NIC is the one bound to the private IP

Can you give me some pointers on how exactly I can do that?



Re: Standalone cluster setup: binding to private IP

Nan Zhu
Hi,

Which NIC is the default one depends on your default gateway (routing) setup.
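
For example, you can check it like this (a sketch; the interface name and addresses are just illustrative):

# show the default route and the interface it uses
ip route show default
#   default via 10.0.0.1 dev eth0

# then check the address bound to that interface
ip addr show eth0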

Best,

-- 
Nan Zhu


Re: Standalone cluster setup: binding to private IP

Izhar ul Hassan
Hi,

You can export the IP address to bind to in conf/spark-env.sh:

export SPARK_MASTER_IP=YourPrivateIP

If you are setting up on Debian or similar, you can use the scripts:





--
/Izhar 



Re: Standalone cluster setup: binding to private IP

David Thomas
I've finally been able to set up the cluster. I can access the cluster URL and see all the workers registered.
Now on the master node, I fire up spark-shell using the command: MASTER=spark://PrivateIP:7077 bin/spark-shell

But now I get the following error:

14/02/15 17:45:04 ERROR Remoting: Remoting error: [Startup failed] [
akka.remote.RemoteTransportException: Startup failed
    at akka.remote.Remoting.akka$remote$Remoting$$notifyError(Remoting.scala:129)
    at akka.remote.Remoting.start(Remoting.scala:194)
    at akka.remote.RemoteActorRefProvider.init(RemoteActorRefProvider.scala:184)
:
org.jboss.netty.channel.ChannelException: Failed to bind to: PublicIP:0
    at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
Caused by: java.net.BindException: Cannot assign requested address
    at sun.nio.ch.Net.bind0(Native Method)

Any pointers?



Re: Standalone cluster setup: binding to private IP

Nan Zhu
Did you set any env variable to the public IP, like SPARK_LOCAL_IP?

The public IP appears in the bind error again…

Best,

-- 
Nan Zhu


Re: Standalone cluster setup: binding to private IP

David Thomas
No, I did not set any env variable to the public IP. I verified the MASTER variable used in spark-shell and it does contain the private IP.



Re: Standalone cluster setup: binding to private IP

David Thomas
I changed the hostname of the node to the private IP using the command 'sudo hostname privateIP'. I'm sure this may not be the right thing to do, but the Spark shell has now connected to the cluster successfully.
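
(A note for the archives: a possibly cleaner alternative, assuming SPARK_LOCAL_IP also controls the address the driver/shell binds to, would be to set it when launching the shell; the address below is a placeholder:)

# on the node where spark-shell is launched
SPARK_LOCAL_IP=10.0.0.10 MASTER=spark://10.0.0.10:7077 bin/spark-shell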

