SPARK

Command Line

Spark provides two main utilities:

  • the first (spark-submit) is used to submit an application to run on the cluster
  • the second is used to get an interactive shell, in either Scala (using spark-shell) or Python (using pyspark). It can be used for one-off tasks or for prototyping.

Both utilities support the same command-line options.
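
For example, an interactive session can be started as follows (the local[4] thread count is only illustrative):

# Scala shell running locally with 4 threads, for quick prototyping
spark-shell --master local[4]

# Python shell with executors running on the YARN cluster
pyspark --master yarn --deploy-mode client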

Command                                              Operation
spark-submit --master yarn --deploy-mode cluster     Submits a Spark application to run on the cluster

Option                Description
--master              Either yarn to run the application on the cluster, or local[<number of threads>] to run it locally (mainly for prototyping purposes)
--deploy-mode         When --master is yarn, deploy-mode is either cluster or client. With client, the Spark driver (which handles task scheduling and typically runs your main function) runs locally while the rest of the application runs on the cluster. With cluster, the driver also runs within the cluster alongside the rest of the application; the local process then only monitors the state of the application and can be stopped without stopping it. The cluster deploy-mode is not available for the interactive utilities (spark-shell and pyspark)
--class               Name of the main class of the application
--jars                Comma-separated list of additional local jar files to ship with the application jar
--py-files            Comma-separated list of additional local Python files
--driver-memory       Memory for the driver container
--driver-cores        Number of cores used by the driver
--executor-memory     Memory for each executor container
--executor-cores      Number of cores used by each executor container
--num-executors       Number of executor containers to start
--conf <key=value>    Sets a Spark configuration value for the application
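
As an illustration, a submission combining several of these options could look as follows; the jar name, main class, resource sizes and the spark.sql.shuffle.partitions value are placeholders to adapt to your application:

# submit a packaged application to the YARN cluster with explicit resource sizing
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.MyApp \
  --driver-memory 2G \
  --executor-memory 4G \
  --executor-cores 2 \
  --num-executors 10 \
  --conf spark.sql.shuffle.partitions=200 \
  my_app.jar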

Python Memory

Memory usage is limited by YARN (Hadoop's resource management system) according to the amount of resources requested. When using Spark, the amount of memory requested is controlled by the --driver-memory option for the driver (controller) and the --executor-memory option for the executors (computation). This memory includes cached RDDs (datasets), the memory used to execute your (Java or Scala) code, as well as Spark's internal functions.

The configuration options spark.driver.memoryOverhead and spark.executor.memoryOverhead are added to the amounts specified with the previous options. When using Java (or Scala), this overhead accounts for native VM overhead, interned strings, etc.

When using Python, this overhead also includes the memory used by the spawned Python processes that execute your code. With Python, the memory specified with the --driver-memory and --executor-memory options is used by Spark to store persisted RDDs, to shuffle data, etc.

As a consequence, when using Python, be sure to set a large enough overhead (for example with --conf spark.driver.memoryOverhead=1G --conf spark.executor.memoryOverhead=1G) for your Python tasks, otherwise YARN will kill your tasks for overusing resources. Such a kill shows up in Spark as errors like: Container killed by YARN for exceeding memory limits. 2.0 GB of 2 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead
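
For example, a Python job could be submitted with an explicit overhead as follows (the script name and the memory sizes are placeholders to adapt to your workload):

# Python job with 1G of overhead reserved for the spawned Python processes
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 2G \
  --executor-memory 2G \
  --conf spark.driver.memoryOverhead=1G \
  --conf spark.executor.memoryOverhead=1G \
  my_job.py

With these settings, each executor container requests 2G + 1G = 3G from YARN, the extra gigabyte being left for the Python worker processes.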

Switching to SPARK3

Apache Spark version 3.1.1 is also available on the platform. To use this version instead of the default 2.3.1 version, you can modify your shell environment with the following bash command:

use_spark3

The following output confirms that version 3 will be used by subsequent Spark commands.

Using Spark from /usr/spark3/bin

+ spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.1.1
      /_/

Using Scala version 2.12.10, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_112
Branch HEAD
Compiled by user ubuntu on 2021-02-22T01:04:02Z
Revision 1d550c4e90275ab418b9161925049239227f3dc9
Url https://github.com/apache/spark
Type --help for more information.

To switch bash back to Spark 2.3.1, use the command

use_spark2
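
To check which version is currently active, you can simply ask spark-submit for its version after switching:

use_spark3
spark-submit --version   # reports version 3.1.1

use_spark2
spark-submit --version   # reports version 2.3.1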

Compatibility changes

Spark 3.0.0 uses Scala 2.12, whereas Spark 2.3.1 uses Scala 2.11. In addition, Spark 3.0.0 deprecates Python 2 and Python 3 versions prior to 3.6.
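
If several Python interpreters are installed, the standard PYSPARK_PYTHON environment variable can be used to make sure pyspark runs on a Python 3.6+ interpreter (the interpreter name below is only an example; use whichever suitable installation is available on the platform):

# check that the interpreter is recent enough for Spark 3
python3 --version

# tell Spark which Python interpreter to use
export PYSPARK_PYTHON=python3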

Project archetypes

Basic example projects are available on gitub.u-bordeaux.fr to ease the creation of a new project. These projects contain some very basic examples of the use of Spark's APIs.

Language    Project URL
Scala       https://gitub.u-bordeaux.fr/flalanne/spark_scala_project
Java        https://gitub.u-bordeaux.fr/flalanne/spark_java_project
Python      https://gitub.u-bordeaux.fr/flalanne/spark_python_project
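
Assuming you have access to these repositories, one way to start a new project is to clone the relevant archetype and adapt it, for example:

git clone https://gitub.u-bordeaux.fr/flalanne/spark_python_project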