BIGDL
BigDL is a library used to distribute AI applications across a cluster of servers. It leverages Spark and is installed in /usr/bigdl on the platform gateway.
It can be used either in Scala or in Python.
SCALA
To launch a BigDL job written in Scala, use either /usr/bigdl/bin/spark-submit-with-dllib.sh to start a job on the cluster or /usr/bigdl/bin/spark-shell-with-dllib.sh to start an interactive shell backed by the cluster.
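For example, the interactive shell can be opened on the cluster as follows (a minimal sketch, assuming the wrapper forwards its options to spark-shell; the resource settings are illustrative and can be adjusted):
# start a Scala shell with the BigDL jars on the classpath; the shell driver runs on the gateway (client mode)
/usr/bigdl/bin/spark-shell-with-dllib.sh \
--master yarn \
--executor-cores 1 \
--num-executors 1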
The example script /usr/bigdl/examples/lsd/scala/example.sh demonstrates how to start an example job from the BigDL repository:
# --master yarn : run the job on the YARN cluster
# --deploy-mode cluster : start the driver on the cluster instead of the gateway
# --executor-cores 1 : 1 core per worker
# --num-executors 1 : 1 worker
# --class com.intel.analytics.zoo.tutorial.SimpleMlp : main class to start
# /usr/bigdl/examples/lsd/scala/simplemlp-0.1.0-SNAPSHOT.jar : jar containing the main class to send to the workers
# -d 5 -h 20 -r 128 -e 1 -c 1 -n 1 -b 4 : arguments of the main function in the main class
/usr/bigdl/bin/spark-submit-with-dllib.sh \
--master yarn \
--deploy-mode cluster \
--executor-cores 1 \
--num-executors 1 \
--class com.intel.analytics.zoo.tutorial.SimpleMlp \
/usr/bigdl/examples/lsd/scala/simplemlp-0.1.0-SNAPSHOT.jar \
-d 5 -h 20 -r 128 -e 1 -c 1 -n 1 -b 4
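With --deploy-mode cluster the driver output does not appear on the gateway; it can be retrieved once the job finishes with the standard YARN CLI (the application ID below is a placeholder for the one printed by spark-submit):
# fetch the aggregated logs of the application
$> yarn logs -applicationId application_1234567890123_0001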
PYTHON
Using BigDL in Python requires a Python environment containing BigDL's Python dependencies. A minimal environment for BigDL is available as an archive at hdfs:///public/envs/bigdl-env.tar.gz.
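You can verify that the archive is present with a standard HDFS listing:
# list the public environment archives
$> hdfs dfs -ls /public/envs/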
The following command demonstrates how to submit the example script /usr/bigdl/examples/lsd/python/example.py as a BigDL job written in Python:
# spark-submit
# --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=environment/bin/python : use the Python interpreter shipped in the environment sent with --archives for the application master
# --conf spark.executorEnv.PYSPARK_PYTHON=environment/bin/python : use the Python interpreter shipped in the environment sent with --archives for the workers
# --jars /usr/bigdl/assembly/bigdl-assembly-spark_3.1.3-2.2.0-jar-with-dependencies.jar : add the jar containing BigDL and its dependencies to the worker classpath
# --master yarn : run the job on the YARN cluster
# --deploy-mode cluster : start the driver on the cluster instead of the gateway
# --executor-memory 10g : set the worker memory size
# --driver-memory 10g : set the driver memory size
# --executor-cores 1 : set the number of cores per worker
# --num-executors 1 : set the number of workers
# --archives hdfs:///public/envs/bigdl-env.tar.gz#environment : ship the Python environment containing BigDL and instruct Spark to extract it into a directory named 'environment'
# /usr/bigdl/examples/lsd/python/example.py : Python file containing our code
spark-submit \
--conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=environment/bin/python \
--conf spark.executorEnv.PYSPARK_PYTHON=environment/bin/python \
--jars /usr/bigdl/assembly/bigdl-assembly-spark_3.1.3-2.2.0-jar-with-dependencies.jar \
--master yarn \
--deploy-mode cluster \
--executor-memory 10g \
--driver-memory 10g \
--executor-cores 1 \
--num-executors 1 \
--archives hdfs:///public/envs/bigdl-env.tar.gz#environment \
/usr/bigdl/examples/lsd/python/example.py
If you need additional Python dependencies, you can use Conda to clone the minimal BigDL environment in /mnt/shared/public/bigdl_python_env into a new environment, add your dependencies, and use conda pack to produce a new tar.gz archive to use in place of hdfs:///public/envs/bigdl-env.tar.gz:
# create a directory in the staging mount point if missing
$> mkdir -p /mnt/staging/$USER/
# create a new environment by cloning the minimal bigdl environment
$> conda create -p /mnt/staging/$USER/bigdl_env/ --clone /mnt/shared/public/bigdl_python_env
# activate the newly created environment
$> conda activate /mnt/staging/$USER/bigdl_env/
# install new packages using conda install or pip install
$> conda install numpy
$> pip install numpy
# create a new archive with the updated environment
$> conda pack -p /mnt/staging/$USER/bigdl_env/ -o /mnt/staging/$USER/bigdl_env.tar.gz
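The packed archive then has to be reachable by the cluster. A minimal sketch, assuming your HDFS home directory hdfs:///user/$USER exists and is writable:
# upload the new archive to HDFS so the workers can fetch it
$> hdfs dfs -put /mnt/staging/$USER/bigdl_env.tar.gz /user/$USER/bigdl_env.tar.gz
# then submit with the new archive in place of the public one, e.g.
# --archives hdfs:///user/$USER/bigdl_env.tar.gz#environment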