HDFS
HDFS is the platform file system. There are essentially three ways of accessing this file system:
- using the HDFS command line
- through other services such as MapReduce or Spark
- with an NFS mount point
A typical workflow would see users upload their data to HDFS using the command line, then process it with MapReduce or Spark, possibly creating additional data stored on HDFS.
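As an illustration, such a session might look like the following sketch; the user name, paths, and the Spark application `wordcount.py` are hypothetical placeholders:

```
# Upload a local data set to HDFS (hypothetical paths)
hdfs dfs -mkdir /user/alice/input
hdfs dfs -copyFromLocal dataset.csv /user/alice/input

# Process it with Spark, writing results back to HDFS
# (wordcount.py stands in for any Spark application)
spark-submit wordcount.py /user/alice/input /user/alice/output

# Download the results to the local file system
hdfs dfs -copyToLocal /user/alice/output .
```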
HDFS command line
The command used to interact with the cluster is `hdfs dfs`. This command has several sub-commands depending on the operation to perform.
For example:
Command | Operation |
---|---|
`hdfs dfs -ls <path>` | Lists the files and directories located in `<path>` |
`hdfs dfs -copyFromLocal <localsrc> ... <dst>` | Uploads one or more local files/directories to the `<dst>` HDFS directory |
`hdfs dfs -copyToLocal <src> ... <localdst>` | Downloads one or more distributed files/directories to the local file system. Beware that the local file system might not have the storage capacity to hold the distributed data |
`hdfs dfs -rm <path>` | Deletes the file at `<path>` |
`hdfs dfs -mkdir <path>` | Creates the directory `<path>` |
`hdfs dfs -chown <user>:<group> <path>` | Changes the ownership of a file or directory |
`hdfs dfs -chmod <mode> <path>` | Changes the access rights of a file or directory |
`hdfs dfs -help <command>` | Prints the reference for operation `<command>` |
The full list of commands and their options can be printed by typing `hdfs dfs -help`. To get additional detail on a command, use `hdfs dfs -help <command>`.
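For instance, creating a project directory and adjusting its ownership and permissions could look like this sketch; the user `alice`, group `analysts`, and path are hypothetical:

```
# Create a directory and restrict access to the owning group
hdfs dfs -mkdir /data/projectX
hdfs dfs -chown alice:analysts /data/projectX
hdfs dfs -chmod 750 /data/projectX

# Verify the result
hdfs dfs -ls /data
```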
HDFS NFS mount point
The distributed file system can be accessed via an NFS mount point on the gateway at `/mnt/hdfs_nfs`.
You can browse and interact with HDFS through this mount point using classic Unix commands such as `ls` or `mv`.
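For example, assuming a file `/user/alice/results.csv` exists on HDFS (a hypothetical path), it can be inspected and copied directly through the mount point:

```
# Browse HDFS as if it were a local directory
ls /mnt/hdfs_nfs/user/alice

# Inspect and copy a file with standard Unix tools
head /mnt/hdfs_nfs/user/alice/results.csv
cp /mnt/hdfs_nfs/user/alice/results.csv /tmp/
```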
However, all data read or written through the mount point goes through a single NFS server, which makes reads and writes slow for large volumes of data. In those cases, prefer the `-copyFromLocal` and `-copyToLocal` commands, or applications such as MapReduce or Spark.
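To make the distinction concrete, here is a sketch with a hypothetical large file: the first command funnels every byte through the single NFS server, while the second uploads through the HDFS client and is the preferred option.

```
# Slow for large volumes: all data passes through the NFS server
cp big_dataset.csv /mnt/hdfs_nfs/user/alice/

# Preferred: upload through the HDFS command line instead
hdfs dfs -copyFromLocal big_dataset.csv /user/alice/
```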