Benchmark the cluster (I/O, CPU, network)

Benchmarking is the process of stress testing the resources of the cluster. It is very useful for understanding your cluster’s performance and for checking that it performs as expected before taking it live.

Here we are going to measure the speed at which files are read and written in HDFS, the time taken for mappers/reducers to process a given amount of data, and other performance characteristics.

We can easily benchmark the cluster by running the test jars bundled with the Cloudera distribution.

These are available in the Hadoop installation directory.



When you run the jar file without any arguments, it shows you the list of available programs.





Below are the four benchmark programs used most often; let’s take a look at each of them.

  • TeraGen
  • TeraSort
  • TeraValidate
  • TestDFSIO

The first three benchmarks are commonly used together to evaluate cluster performance. They are collectively called the TeraSort Benchmark Suite.

TeraSort Benchmark Suite:

The TeraSort benchmark is the most widely known Hadoop benchmark. It exercises both the HDFS and MapReduce layers of a Hadoop cluster and consists of three MapReduce programs.

There are three steps involved in the TeraSort benchmarking suite:

1. Generating the input data via TeraGen.

2. Running the actual TeraSort on the input data.

3. Validating the sorted output data via TeraValidate.
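Putting the three steps together, the whole suite is just three jar invocations run back to back. The sketch below only echoes the commands (so it can run anywhere, without a cluster); the jar path and HDFS directories follow the examples in this post and should be adjusted for your environment.

```shell
# Sketch of the full TeraSort suite run. The jar location and the HDFS
# paths (/hadoop/teragen etc.) are the ones used in the examples in this
# post; adjust them for your cluster. Commands are echoed, not executed.
EXAMPLES_JAR=/opt/cloudera/parcels/CDH/jars/hadoop-examples.jar

echo "hadoop jar $EXAMPLES_JAR teragen 10000000 /hadoop/teragen"
echo "hadoop jar $EXAMPLES_JAR terasort /hadoop/teragen /hadoop/terasort"
echo "hadoop jar $EXAMPLES_JAR teravalidate /hadoop/terasort /hadoop/teravalidate"
```

Run the three real commands in that order; each step consumes the previous step’s output directory.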


This program is available in hadoop-examples.jar (you can also use hadoop-*examples*.jar).

When run with no arguments, it asks you to specify the number of rows and the output directory.


hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar teragen

teragen <num rows> <output dir>

Each row is 100 bytes in size, so to generate 1 GB of data the number of rows is 10,000,000, and the output will be stored in the HDFS directory /hadoop/teragen.
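Since each row is exactly 100 bytes, the row count for any target data size is simply the size in bytes divided by 100. A quick shell sanity check (using decimal GB/TB, consistent with the 1 GB = 10,000,000 rows figure above):

```shell
# rows = target size in bytes / 100 (each TeraGen row is 100 bytes)
gb=1000000000                  # 1 GB, decimal
tb=1000000000000               # 1 TB, decimal

echo $((gb / 100))             # 10000000     rows for 1 GB
echo $((tb / 100))             # 10000000000  rows for 1 TB
echo $((4 * tb / 100))         # 40000000000  rows for 4 TB
```

The same arithmetic answers sizing questions for any target, e.g. 20 TB would need 200,000,000,000 rows.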

To change the block size of the generated data, you can pass the argument “-D dfs.block.size=<size in bytes>”.
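Note that dfs.block.size expects a plain byte count, so a 256 MB block, for instance, has to be written out as 268435456. A quick check of the arithmetic:

```shell
# dfs.block.size takes a value in bytes; common block sizes in bytes:
echo $((256 * 1024 * 1024))   # 268435456  (256 MB)
echo $((128 * 1024 * 1024))   # 134217728  (128 MB)
```

A hypothetical TeraGen invocation with this override (jar path as in the examples in this post) would then look like: hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar teragen -D dfs.block.size=268435456 <num rows> <output dir>.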


I’m generating 1 GB of data and storing it in the /hadoop/teragen HDFS directory.

# hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar teragen 10000000 /hadoop/teragen



The data generated by TeraGen can be used as the input data for TeraSort.


hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar terasort <input dir> <output dir>

# sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar terasort /hadoop/teragen /hadoop/terasort






TeraValidate validates the sorted output of TeraSort, ensuring that the keys are sorted within each file. If anything is wrong with the sorted output, the output of this reducer reports the problem.


hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar teravalidate <terasort output dir> <teravalidate output dir>

# sudo -u hdfs hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-examples.jar teravalidate /hadoop/terasort /hadoop/teravalidate






The TestDFSIO benchmark is an I/O test, i.e., a read and write test for HDFS. It is useful for identifying the read/write speed of your HDFS (i.e., all the DataNode disks) and understanding how fast the cluster is in terms of I/O.

When run without arguments, it shows you the usage details.

Write Test:

Always run the write test first, then run the read test using the files generated by the write test.

Here, I’m running a write test that creates 3 files, each 100 MB in size, i.e., a total of 300 MB written to HDFS. TestDFSIO writes the files to /benchmarks/TestDFSIO on HDFS, and the benchmark results are stored in a local file, TestDFSIO_results.log.

# hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-test-2.6.0-mr1-cdh5.12.1.jar TestDFSIO -write -nrFiles 3 -size 100MB

Read Test:

# hadoop jar /opt/cloudera/parcels/CDH/jars/hadoop-test-2.6.0-mr1-cdh5.12.1.jar TestDFSIO -read -nrFiles 3 -size 100MB
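The throughput and average I/O rate figures in TestDFSIO_results.log are the numbers you usually care about. As a rough sketch, assuming the log has its usual key: value layout (the sample below is fabricated, and the exact fields can vary between Hadoop versions), awk can pull out the throughput line:

```shell
# Fabricated sample of a TestDFSIO results log; real values and exact
# layout may differ between Hadoop versions.
cat > /tmp/TestDFSIO_results.log <<'EOF'
----- TestDFSIO ----- : write
       Number of files: 3
Total MBytes processed: 300.0
     Throughput mb/sec: 14.21
EOF

# Extract just the throughput figure:
awk -F': ' '/Throughput mb\/sec/ {print $2}' /tmp/TestDFSIO_results.log
```

Comparing this figure between runs (and against your disks’ raw speed) is what tells you whether HDFS I/O is performing as expected.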




Problem Scenarios:

  • Run the TeraSort benchmarking suite for 4 TB of data and store the results in an HDFS output dir.
  • Run TestDFSIO to test HDFS performance and store the results in an HDFS dir.
  • Generate 1 TB of data with a block size of 256 MB using TeraGen.
  • Troubleshoot the errors encountered in benchmarking jobs and run them successfully.



This is one of the important exam topics, and you can definitely expect a question on benchmarking in the exam.

To learn more about these benchmarks, check out this wonderful post, which explains them in detail.

Thus we covered how to benchmark the cluster.

Use the comments section below to post your doubts, questions and feedback.

Please follow my blog to get notified of more certification related posts, exam tips, etc.



12 thoughts on “Benchmark the cluster (I/O, CPU, network)”

  1. Hi Kannan,

    I am trying to run the terasort MapReduce program from the edge node, where I end up getting a Java heap space error:
    WARN mapred.LocalJobRunner: job_local562076813_0001
    java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(
    Note: If I run the same program from my master node, it executes successfully. I want to execute it from the edge node. Could you please tell me how to increase the Java heap size at run time? I don’t want to mess up the cluster by making changes in the configuration files or other places, as it is working fine on the master nodes.

    1. Hi,
      This error occurs when there is not sufficient Java heap space available to run the job.
      It may be executing fine on the master if the master has a higher configuration.
      By the way, to make changes at run time, you can pass the below parameter while submitting the job.

      1. Hi,
        I ran the command as below:
        hadoop jar /opt/cloudera/parcels/CDH-5.7.1-1.cdh5.7.1.p0.11/lib/hadoop-0.20-mapreduce/hadoop-examples.jar terasort “” /tmp/abc/Teragen /tmp/abc/Terasort

        Still getting the same issue:

        17/11/20 09:14:57 INFO mapreduce.Job: map 100% reduce 0%
        17/11/20 09:14:57 WARN mapred.LocalJobRunner: job_local699024141_0001
        java.lang.Exception: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(
        at org.apache.hadoop.mapred.LocalJobRunner$
        Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.init(
        at org.apache.hadoop.mapred.MapTask.createSortingCollector(
        at org.apache.hadoop.mapred.MapTask.access$100(
        at org.apache.hadoop.mapred.MapTask$NewOutputCollector.(
        at org.apache.hadoop.mapred.MapTask.runNewMapper(
        at org.apache.hadoop.mapred.LocalJobRunner$Job$
        at java.util.concurrent.Executors$
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$
        17/11/20 09:14:58 INFO mapreduce.Job: Job job_local699024141_0001 failed with state FAILED due to: NA
        17/11/20 09:14:58 INFO mapreduce.Job: Counters: 22
        File System Counters
        FILE: Number of bytes read=838073496
        FILE: Number of bytes written=1676160320
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=825306400
        HDFS: Number of bytes written=0
        HDFS: Number of read operations=44
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=4
        Map-Reduce Framework
        Map input records=5368709
        Map output records=5368709
        Map output bytes=547608318
        Map output materialized bytes=558345748
        Input split bytes=246
        Combine input records=0
        Spilled Records=10737418
        Failed Shuffles=0
        Merged Map outputs=0
        GC time elapsed (ms)=209
        Total committed heap usage (bytes)=479199232
        File Input Format Counters
        Bytes Read=536870900
        17/11/20 09:14:58 INFO terasort.TeraSort: done

        1. May I know your cluster configuration?
          How many nodes, and how much memory is available on each node?
          Also, how much memory and how many cores are available on the Resource Manager page?

          1. Hi,

            I have an 8-node cluster; each node has 30 GB RAM except the edge node, which has 15 GB. All nodes have a 200 GB hard disk except the edge node, which has 100 GB. In total, 24 vcores and 52.68 GB of memory are available.
            By the way, I am no longer getting this Java heap size issue, as I have added the YARN gateway role to the edge node, so the job gets submitted there itself rather than running locally. Just wanted to confirm, by adding this line “”
            Does it really have any impact when the cluster is managed by Cloudera?

            1. Cloudera Manager is nothing but the management service that helps you perform actions in the UI. Whatever change you make in the UI is applied in the backend to the configuration files by CM.
              Passing arguments on the command line with “-D” won’t change any configuration files, nor impact CM.
              It’s a change made at run time, overriding whatever default values are set for the jobs. If you don’t pass the value, the job will take the default config.
              If you manually edit any conf files (core-site.xml, hdfs-site.xml), it will impact the cluster: the services will go stale and a client config deployment will be needed.
              Hope this helps.

  2. Hello Kannan
    Great preparation guide and examples, super useful 🙂
    I had my exam but unfortunately I could not make it. I got a benchmarking question, resolved the problem, ran TeraGen, TeraSort, and TeraValidate correctly, got successful runs, and created all the output in the correct paths. I was so confident about it, yet it was marked as “Incorrect configuration”!
    In this problem I found that the Java heap values for both map and reduce were 0. I changed them to 512M, and that’s how it was able to run and generate all the files… but magically it was marked wrong!
    Could you please tell me what could be the reason?

    1. Hi,

      Thank you for your kind words. Are you sure this problem was marked incorrect even after you successfully ran TeraGen, TeraSort, and TeraValidate?
      If you were able to fix the heap issue and successfully run all three jobs, then it’s absolutely right.

      One possible reason I can think of for why it was marked incorrect is that you may have mistyped the number of rows or data size for TeraGen, or used the wrong output paths.

  3. Hi All,

    If I want to run terasort with 20 TB, what is the number of rows? I do not understand how to calculate the row count. Thanks in advance, regards Nicola

  4. Hello, thanks for the information you share. 🙂
    One question: in the case where you say “Run the Testdfsio to test the hdfs performance and store the results in hdfs dir”, is it correct not to pass parameters (like -nrFiles and -size)?
