Hadoop

Why Map outputs are stored in local FS and not in HDFS?

Map outputs are temporary intermediate data that serve no purpose to the user running the job. They are consumed by the reducers, which shuffle, sort and merge them to produce the final output.

It’s not recommended to store this data in HDFS: it would be replicated across the cluster, the NameNode would have to track its metadata, and so on. Also, if the job is terminated by a failure, the intermediate data would remain in HDFS and would have to be cleaned up manually.

Due to these limitations, map outputs are stored on the local filesystem of the respective NodeManagers. The output is first collected in an in-memory buffer whose size is set by the “mapreduce.task.io.sort.mb” property (100 MB by default). When the contents of the buffer reach a certain threshold, defined by “mapreduce.map.sort.spill.percent” (default 0.80, i.e. 80%), a background thread starts spilling the contents to disk.
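As an illustration, both thresholds can be set in mapred-site.xml (or overridden per job); the values shown below are simply the defaults:

<!-- mapred-site.xml: illustrative values; 100 MB and 0.80 are the defaults -->
<property>
  <name>mapreduce.task.io.sort.mb</name>
  <value>100</value>
  <description>Size, in MB, of the in-memory buffer that collects map output before it is spilled.</description>
</property>
<property>
  <name>mapreduce.map.sort.spill.percent</name>
  <value>0.80</value>
  <description>Fraction of the buffer at which a background thread starts spilling to local disk.</description>
</property>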

You can change the storage location of the intermediate map output by modifying the “mapred.local.dir” property (renamed “mapreduce.cluster.local.dir” in newer releases) in mapred-site.xml.
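For instance, a minimal sketch of that setting, using hypothetical paths (a comma-separated list spreads the spill I/O across multiple local disks):

<!-- mapred-site.xml: example only; the paths below are placeholders -->
<property>
  <name>mapred.local.dir</name>
  <value>/data1/mapred/local,/data2/mapred/local</value>
</property>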

It’s also recommended to compress the map output written to disk: it makes the writes faster, saves disk space, and reduces the amount of data transferred to the reducers.
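A sketch of how that could look in mapred-site.xml (it can also be set per job); Snappy is shown only as an example codec and assumes its native libraries are available on the cluster:

<!-- Enable compression of the intermediate map output; Snappy is an example choice -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>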

The only scenario where writing the map output to HDFS is reasonable is when your job has only mappers and no reducers.
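In that case the map output is the final output of the job, and each mapper writes directly to the job’s HDFS output path. For example, such a map-only job can be requested with a configuration like:

<!-- Request a map-only job: with zero reducers the map output becomes the final job output -->
<property>
  <name>mapreduce.job.reduces</name>
  <value>0</value>
</property>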
