Kubernetes security context when submitting job through k8s servers

Kubernetes security context when submitting job through k8s servers

trung kien
Dear all,

Is there any way to include a security context (https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) when submitting a job through the k8s API server?
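For reference, the kind of securityContext described on that doc page looks roughly like this (a minimal sketch only; the pod/container names and the user/group IDs are placeholders, not anything Spark generates):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spark-pi-driver        # placeholder, matching the driver pod name below
spec:
  securityContext:             # pod-level: applies to all containers
    runAsUser: 1000            # example non-root UID
    runAsGroup: 3000
    fsGroup: 2000
  containers:
  - name: spark-kubernetes-driver
    image: <SPARK_IMAGE>
    securityContext:           # container-level: overrides pod-level fields
      runAsNonRoot: true
```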

I'm trying to run my first Spark job on Kubernetes through spark-submit:

bin/spark-submit \
  --master k8s://https://API_SERVERS \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.namespace=NAMESPACE \
  --conf spark.executor.instances=3 \
  --conf spark.kubernetes.container.image=<SPARK_IMAGE> \
  --conf spark.kubernetes.driver.pod.name=spark-pi-driver \
  local:///opt/spark/examples/jars/spark-examples_2.11-2.3.1.jar

But the job was rejected because the pod created by spark-submit doesn't have a security context to run as my account (our policy doesn't allow running as the root user).
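One workaround in this situation, independent of Spark's submission client, is to bake a non-root user into the container image itself (a sketch under assumptions; `<SPARK_IMAGE>` is the same placeholder used in the spark-submit command above, and the UID is just an example that must exist in, or be acceptable to, the base image and cluster policy):

```dockerfile
# Sketch: rebuild the Spark image so containers run as a non-root user
# by default, without relying on spark-submit to set a securityContext.
FROM <SPARK_IMAGE>
USER 185    # example non-root UID; choose one allowed by your cluster policy
```

Note this only changes the default user the container runs as; a PodSecurityPolicy that requires an explicit `runAsUser`/`runAsNonRoot` field on the pod spec may still reject the pod.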

I checked the code in KubernetesClientApplication.scala, and it doesn't seem to support setting a security context through configuration.

Is there any way to get around this issue? Is there a patch that supports this?

--
Thanks
Kien

Re: Kubernetes security context when submitting job through k8s servers

Yinan Li
Spark on k8s currently doesn't support specifying a custom SecurityContext for the driver/executor pods. This will be supported by the solution to https://issues.apache.org/jira/browse/SPARK-24434.
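The direction discussed in that JIRA is a user-supplied pod template file that spark-submit merges into the pods it creates. A sketch of what such a template carrying a security context might look like (the field names follow the standard Pod spec; the template mechanism itself is what SPARK-24434 proposes and was not available at the time of this thread):

```yaml
# template.yaml -- hypothetical pod template supplying the securityContext
apiVersion: v1
kind: Pod
spec:
  securityContext:
    runAsUser: 1000      # example non-root UID
    runAsNonRoot: true
```

In later Spark releases this eventually shipped as the `spark.kubernetes.driver.podTemplateFile` and `spark.kubernetes.executor.podTemplateFile` configuration properties, but at the time of this thread it was still under design review.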

On Mon, Jul 9, 2018 at 2:06 PM trung kien <[hidden email]> wrote:

Re: Kubernetes security context when submitting job through k8s servers

trung kien
Thanks Li,

I read through the ticket; being able to pass a pod YAML file would be amazing.

Do you have a target date for a production or incubator release? I really want to try out this feature.

On Mon, Jul 9, 2018 at 4:48 PM Yinan Li <[hidden email]> wrote:
--
Thanks
Kien

Re: Kubernetes security context when submitting job through k8s servers

Yinan Li
It's still under design review. It's unlikely that it will go into 2.4.

On Mon, Jul 9, 2018 at 3:46 PM trung kien <[hidden email]> wrote: