Running Spark on Google Kubernetes Engine

In this example, we are going to deploy a Spark 2.3 environment on top of Google Kubernetes Engine (GKE). The same steps apply to any managed Kubernetes service you prefer, such as Amazon EKS, Azure Container Service, or Pivotal Container Service.

Since Spark 2.3 supports Kubernetes natively, there's almost nothing else to set up: once the Kubernetes cluster is up, it's ready to accept Spark jobs. The only extra piece we need is the Spark History Server, which lets users view the logs of completed jobs.

Steps

1. Create a Google Kubernetes Engine cluster

Please refer to the Quickstart Guide for how to create a GKE cluster, get authentication credentials for it, and use the kubectl command-line tool.
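If you don't have a cluster yet, the gcloud commands below are a minimal sketch; the cluster name spark-on-k8s, the zone, and the node count are placeholders you should adjust for your project:

$ gcloud container clusters create spark-on-k8s --zone us-central1-a --num-nodes 3
$ gcloud container clusters get-credentials spark-on-k8s --zone us-central1-a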

2. Prepare Spark 2.3

Download Spark 2.3 from the Apache Spark website.

$ curl -O http://apache.mirrors.lucidnetworks.net/spark/spark-2.3.0/spark-2.3.0-bin-hadoop2.7.tgz
$ tar -zxvf spark-2.3.0-bin-hadoop2.7.tgz
$ ls spark-2.3.0-bin-hadoop2.7
LICENSE    README.md  conf       jars       python
NOTICE     RELEASE    data       kubernetes sbin
R          bin        examples   licenses   yarn

Download the Google Cloud Storage Connector for Hadoop 2.x, Hadoop AWS 2.7.x, and the AWS SDK for Java, then copy the jar files to jars/ in the Spark distribution. These are the dependencies needed to use Google Cloud Storage and Amazon S3 as the data/log store.

$ curl -O https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar
$ curl -O http://central.maven.org/maven2/org/apache/hadoop/hadoop-aws/2.7.3/hadoop-aws-2.7.3.jar
$ curl -O https://sdk-for-java.amazonwebservices.com/latest/aws-java-sdk.zip
$ unzip aws-java-sdk.zip
$ cp gcs-connector-latest-hadoop2.jar hadoop-aws-2.7.3.jar aws-java-sdk-1.11.292/lib/aws-java-sdk-1.11.292.jar spark-2.3.0-bin-hadoop2.7/jars/

Build the Spark 2.3 Docker image. Spark 2.3 ships with a Dockerfile which we can use directly to build the image. Please refer to the Docker documentation for how to install Docker on your laptop and create a Docker ID. Once the image is built, we need to push it to a Docker registry such as Docker Hub, Google Container Registry, or Harbor.

$ cd spark-2.3.0-bin-hadoop2.7
$ docker build -f ./kubernetes/dockerfiles/spark/Dockerfile . -t azureq/pantheon:spark-2.3
$ docker push azureq/pantheon:spark-2.3

Now you have a Spark 2.3 Docker image with the GCS and Amazon S3 connectors built in.
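As an optional sanity check, you can list the jars baked into the image. The /opt/spark path comes from the bundled Dockerfile, and its entrypoint should pass unrecognized commands straight through:

$ docker run --rm azureq/pantheon:spark-2.3 ls /opt/spark/jars | grep -E 'gcs-connector|hadoop-aws|aws-java-sdk'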

3. Deploy Spark History Server on GKE with GCS as the backend storage

The Spark History Server is the web UI that lets users view Spark applications after the fact. Spark applications should be configured to log events to a directory, which the Spark History Server reads to reconstruct the visualization. The directory can be a local file path, an HDFS path, or any alternative file system supported by the Hadoop APIs. In this example, we'll use Google Cloud Storage as the backend storage since we are running on a Google Kubernetes Engine cluster; you may also use Amazon S3, Azure Blob Storage, etc.
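Concretely, event logging is controlled by two application-side settings, shown here in spark-defaults.conf form for reference (we'll pass the same values via --conf at submit time in step 4, using the bucket created below):

spark.eventLog.enabled   true
spark.eventLog.dir       gs://spark-history-server/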

Set up gsutil and gcloud on your laptop and associate them with your GCP project. Then create a bucket, create an IAM service account sparkonk8s, generate a JSON key file sparkonk8s.json, and grant sparkonk8s admin permission on the bucket gs://spark-history-server.

$ gsutil mb -c nearline gs://spark-history-server
$ export ACCOUNT_NAME=sparkonk8s
$ export GCP_PROJECT_ID=qshao
$ gcloud iam service-accounts create ${ACCOUNT_NAME} --display-name "${ACCOUNT_NAME}"
$ gcloud iam service-accounts keys create "${ACCOUNT_NAME}.json" --iam-account "${ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com"
$ gcloud projects add-iam-policy-binding ${GCP_PROJECT_ID} --member "serviceAccount:${ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com" --role roles/storage.admin
$ gsutil iam ch serviceAccount:${ACCOUNT_NAME}@${GCP_PROJECT_ID}.iam.gserviceaccount.com:objectAdmin gs://spark-history-server
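You can verify that the bucket-level binding took effect with:

$ gsutil iam get gs://spark-history-server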

Create a Kubernetes Secret from sparkonk8s.json; it will later be mounted into the Spark application Pods and the Spark History Server Pod so they can write to and read from gs://spark-history-server.

$ kubectl create secret generic sparklogs --from-file=/Path/to/sparkonk8s.json
$ kubectl describe secret sparklogs
Name:         sparklogs
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
sparkonk8s.json:  2309 bytes

Next, we construct the yml file for deploying a Spark History Server Pod. As you can see below, it defines a Deployment that uses the previously built Spark Docker image azureq/pantheon:spark-2.3, mounts the GCS service account Secret sparkonk8s.json into the Pod, and configures the GCS bucket gs://spark-history-server as the log directory. It also defines a Service to allow remote access.

kind: Deployment
apiVersion: apps/v1beta1
metadata:
  name: spark-history-server-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: spark-history-server
  template:
    metadata:
      labels:
        component: spark-history-server
    spec:
      containers:
        - name: spark-history-server
          image: azureq/pantheon:spark-2.3
          env:
          - name: SPARK_HISTORY_OPTS
            value: "-Dspark.hadoop.google.cloud.auth.service.account.json.keyfile=/etc/secrets/sparkonk8s.json"
          command: ["/opt/spark/bin/spark-class","org.apache.spark.deploy.history.HistoryServer","gs://spark-history-server/"]
          ports:
            - containerPort: 18080
          resources:
            requests:
              cpu: "1"
              memory: "1024Mi"
          volumeMounts:
          - name: sparklogs-secrets
            mountPath: "/etc/secrets"
            readOnly: true
      volumes:
      - name: sparklogs-secrets
        secret:
          secretName: sparklogs
---
kind: Service
apiVersion: v1
metadata:
  name: spark-history-server
spec:
  type: NodePort
  ports:
    - port: 18080
      targetPort: 18080
      name: history
  selector:
    component: spark-history-server

Now we can deploy the Spark History Server Pod on the GKE cluster.

$ kubectl create -f spark-history-server.yml
deployment "spark-history-server-deployment" created
service "spark-history-server" created

To access the Spark History Server UI, get the Pod name, use kubectl port-forward to forward port 18080 on your local workstation to port 18080 of the Pod, then open http://localhost:18080 in your browser.

$ kubectl get pods
NAME                                               READY     STATUS    RESTARTS   AGE
spark-history-server-deployment-59f65cbf5b-64tgw   1/1       Running   0          8m
$ kubectl port-forward spark-history-server-deployment-59f65cbf5b-64tgw 18080:18080
Forwarding from 127.0.0.1:18080 -> 18080
Handling connection for 18080
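Since the Service is of type NodePort, you can alternatively look up the assigned node port and reach the UI through a node's external IP, assuming your firewall rules permit it:

$ kubectl get svc spark-history-server -o jsonpath='{.spec.ports[0].nodePort}'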

4. Submit a Spark job with event logging enabled and log events to the GCS bucket

Now we can submit a Spark job to the GKE cluster and view its logs after it finishes. As shown in the Architecture section, the Spark driver Pod requests executor Pods from the Kubernetes master (the API server, to be specific), so it needs a service account with the proper permissions. We will create a service account spark, bind it to the edit ClusterRole, and specify it when submitting the job.

Please refer to the official Running Spark On Kubernetes Doc for detailed explanation of all the parameters.

$ kubectl create serviceaccount spark
serviceaccount "spark" created
$ kubectl create clusterrolebinding spark-role --clusterrole=edit --serviceaccount=default:spark --namespace=default
clusterrolebinding "spark-role" created
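The API server address for the k8s:// master URL below can be found with kubectl cluster-info:

$ kubectl cluster-info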
$ cd spark-2.3.0-bin-hadoop2.7
$ bin/spark-submit \
    --master k8s://https://<Your Kubernetes Master's IP> \
    --deploy-mode cluster \
    --name spark-pi \
    --class org.apache.spark.examples.SparkPi \
    --conf spark.eventLog.enabled=true \
    --conf spark.eventLog.dir=gs://spark-history-server/ \
    --conf spark.executor.instances=5 \
    --conf spark.kubernetes.container.image=azureq/pantheon:spark-2.3 \
    --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
    --conf spark.hadoop.google.cloud.auth.service.account.json.keyfile=/etc/secrets/sparkonk8s.json \
    --conf spark.kubernetes.driver.secrets.sparklogs=/etc/secrets \
    --conf spark.kubernetes.executor.secrets.sparklogs=/etc/secrets \
    local:///opt/spark/examples/jars/spark-examples_2.11-2.3.0.jar

After the job finishes, you'll be able to see its event logs in the Spark History Server by accessing http://localhost:18080 (don't forget the port forwarding).
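While the job is running, you can watch the driver Pod (Spark's Kubernetes scheduler labels it spark-role=driver) and tail its log; replace <driver-pod-name> with the actual Pod name from the first command:

$ kubectl get pods -l spark-role=driver
$ kubectl logs -f <driver-pod-name>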
