Spark provides two main utilities:
- the first (`spark-submit`) is used to submit an application to run on the cluster;
- the second is used to get an interactive shell, in either Scala (using `spark-shell`) or Python (using `pyspark`). It can be used for one-off tasks or for prototyping.
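For example, an interactive Python shell running on the YARN cluster can be started like this (the resource values are purely illustrative):

```bash
# Start an interactive PySpark shell on the YARN cluster;
# the resource options are illustrative, adjust them to your needs.
pyspark --master yarn --num-executors 2 --executor-memory 2G
```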
Both utilities support the same command-line options.
| Option | Description |
| --- | --- |
| `spark-submit --master yarn --deploy-mode cluster` | Submits a Spark application on the cluster |
| `--deploy-mode` | When `--master` is `yarn`, whether the driver runs on the submitting machine (`client`) or inside the cluster (`cluster`) |
| `--class` | Name of the main class of the application |
| `--jars` | Comma-separated list of additional local jar files to ship with the application jar |
| `--py-files` | Comma-separated list of additional local Python files |
| `--driver-memory` | Memory for the driver container |
| `--driver-cores` | Number of cores used by the driver |
| `--executor-memory` | Memory for executor containers |
| `--executor-cores` | Number of cores used by each executor container |
| `--num-executors` | Number of executor containers to start |
| `--conf` | Sets a Spark configuration value for the application |
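Putting these options together, a full submission might look like the following sketch (the class name `com.example.WordCount`, the jar name and the resource values are placeholders, not platform defaults):

```bash
# Hypothetical submission of a Scala/Java application to YARN;
# com.example.WordCount and wordcount.jar are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class com.example.WordCount \
  --driver-memory 2G \
  --executor-memory 4G \
  --executor-cores 2 \
  --num-executors 10 \
  --conf spark.executor.memoryOverhead=1G \
  wordcount.jar
```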
Memory usage is limited by YARN (Hadoop's resource management system) to the amount of resources requested. When using Spark, the amount of memory requested is controlled by the option `--driver-memory` for the driver (controller) and `--executor-memory` for the executors (computation). This memory includes cached RDDs (datasets), the memory used to execute your (Java or Scala) code, as well as Spark's internal functions.
The configuration options `spark.driver.memoryOverhead` and `spark.executor.memoryOverhead` are added to the amounts specified with the previous options. When using Java (or Scala), this overhead accounts for native VM overhead, interned strings, etc. When using Python, this overhead also includes the memory used by the spawned Python processes that execute your code: with Python, the memory specified with the option `--executor-memory` is used by Spark itself to store persisted RDDs, to shuffle data, and so on.
As a consequence, when using Python, be sure to set a large enough overhead (for example with `--conf spark.driver.memoryOverhead=1G --conf spark.executor.memoryOverhead=1G`), so that YARN does not kill your tasks for exceeding their resource requests. Such kills show up in Spark as errors like:

`Container killed by YARN for exceeding memory limits. 2.0 GB of 2 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead`
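For instance, with `--executor-memory 2G` and `spark.executor.memoryOverhead=1G`, YARN allocates a 3 GB container per executor: 2 GB for the JVM, plus roughly 1 GB available for the spawned Python workers. A minimal sketch of such a submission (`my_job.py` is a placeholder):

```bash
# Hypothetical PySpark submission that reserves explicit overhead
# for the Python worker processes (my_job.py is a placeholder).
# YARN container size per executor: 2G (JVM) + 1G (overhead) = 3 GB.
spark-submit \
  --master yarn --deploy-mode cluster \
  --executor-memory 2G \
  --conf spark.executor.memoryOverhead=1G \
  --conf spark.driver.memoryOverhead=1G \
  my_job.py
```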
Switching to Spark 3
Apache Spark version 3.1.1 is also available on the platform. To use it instead of the default 2.3.1 version, modify your shell environment from bash.
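One plausible way to do this (an assumption based on the output below, which reports Spark under `/usr/spark3/bin`) is to prepend that directory to `PATH`:

```bash
# Assumption: the Spark 3 installation lives under /usr/spark3/bin,
# as the version output below suggests.
export PATH=/usr/spark3/bin:$PATH
spark-submit --version
```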
The following output confirms that version 3 will be used by subsequent Spark commands:
```
Using Spark from /usr/spark3/bin
+ spark-submit --version
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 3.1.1
      /_/

Using Scala version 2.12.10, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_112
Branch HEAD
Compiled by user ubuntu on 2021-02-22T01:04:02Z
Revision 1d550c4e90275ab418b9161925049239227f3dc9
Url https://github.com/apache/spark
Type --help for more information.
```
To switch the current shell back to the default Spark 2.3.1, undo this environment change.
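Assuming Spark 3 was enabled by prepending `/usr/spark3/bin` to `PATH` as sketched above, stripping that entry (or simply opening a new shell) restores the default:

```bash
# Assumption: Spark 3 was enabled via PATH as in the sketch above.
export PATH=${PATH#/usr/spark3/bin:}
spark-submit --version   # should report 2.3.1 again
```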
Spark 3 uses Scala 2.12, whereas Spark 2.3.1 uses Scala 2.11. In addition, Spark 3.0.0 deprecates Python 2 and Python 3 versions prior to 3.6.
Basic example projects are available on gitub.u-bordeaux.fr to ease the creation of a new project. These projects contain some very basic examples of how to use Spark's APIs.