
Understanding block storage in HDFS

 

Here we are going to discuss HDFS storage. HDFS files are broken down into blocks in the underlying storage. A block is the smallest unit that can be stored in a file system. In HDFS, the default block size is 64 MB, and this default is defined in hdfs-default.xml, which ships with the Hadoop distribution. However, HDFS allows users to configure their own block size; it can be 64 MB, 128 MB, and so on. This is done in hdfs-site.xml through the dfs.blocksize property; by setting this property in hdfs-site.xml we can choose the block size used for the files stored in HDFS.
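As a minimal sketch of how this looks from the client side (assuming the Hadoop client libraries are on the classpath; the 128 MB value is just an example), the same dfs.blocksize property can be read and overridden through Hadoop's Configuration API:

```java
import org.apache.hadoop.conf.Configuration;

public class BlockSizeConfig {
    public static void main(String[] args) {
        // Picks up *-site.xml files found on the classpath, including hdfs-site.xml.
        Configuration conf = new Configuration();

        // Read the configured block size, falling back to 64 MB (the default
        // discussed above) if dfs.blocksize is not set anywhere.
        long blockSize = conf.getLong("dfs.blocksize", 64L * 1024 * 1024);
        System.out.println("Configured block size: " + blockSize + " bytes");

        // Override it for this client only, for example to 128 MB.
        conf.setLong("dfs.blocksize", 128L * 1024 * 1024);
    }
}
```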

 

One common question about block size is: why 64 MB? Why not 4 KB, as in traditional filesystems? Consider a file, File 1, that is 100 MB in size and is to be stored in HDFS. With a 4 KB block size, as in a traditional filesystem, 25,600 blocks would have to be created to store File 1 (100 MB divided by 4 KB equals 25,600). This can be done, but to fetch File 1 the client would have to make 25,600 requests, because in HDFS there is one request per block. 25,600 requests means a lot of traffic, so this is one of the reasons why the block size is not 4 KB as in traditional filesystems.
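The arithmetic behind this can be checked with a short snippet (a plain calculation, not part of any Hadoop API):

```java
public class BlockCount {
    // Number of blocks needed to store a file: ceiling of fileSize / blockSize.
    // Each block means roughly one request when the file is read back.
    static long blocksFor(long fileSizeBytes, long blockSizeBytes) {
        return (fileSizeBytes + blockSizeBytes - 1) / blockSizeBytes;
    }

    public static void main(String[] args) {
        long file1 = 100L * 1024 * 1024;         // File 1: 100 MB

        long traditional = 4L * 1024;            // 4 KB block size
        long hdfsDefault = 64L * 1024 * 1024;    // 64 MB block size

        System.out.println("4 KB blocks : " + blocksFor(file1, traditional)); // 25600
        System.out.println("64 MB blocks: " + blocksFor(file1, hdfsDefault)); // 2
    }
}
```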

 

The second reason is that the name node stores the metadata about the blocks held in HDFS. The name node's metadata covers the files created in HDFS, the directories created in HDFS, the file-to-block mappings, and similar information, and all of it is kept as objects in the name node's RAM. A small block size therefore means more blocks, and hence more metadata to store, which requires a larger name node RAM and can lead to memory problems. This is another reason why the block size in HDFS is 64 MB by default.
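To get a feel for the scale, here is a rough comparison for an assumed 1 TB dataset, using an assumed rule-of-thumb figure of about 150 bytes of name node heap per block object (both numbers are illustrative, not measurements):

```java
public class NameNodeMetadataEstimate {
    public static void main(String[] args) {
        // Assumed figures, purely for illustration.
        long dataset = 1024L * 1024 * 1024 * 1024;   // 1 TB of data in HDFS
        long bytesPerBlockObject = 150;              // rough per-block metadata cost

        long blocks4k  = dataset / (4L * 1024);          // 268,435,456 block objects
        long blocks64m = dataset / (64L * 1024 * 1024);  // 16,384 block objects

        System.out.println("4 KB blocks : " + blocks4k + " objects, ~"
                + blocks4k * bytesPerBlockObject / (1024 * 1024) + " MB of metadata");
        System.out.println("64 MB blocks: " + blocks64m + " objects, ~"
                + blocks64m * bytesPerBlockObject / (1024 * 1024) + " MB of metadata");
    }
}
```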

 

So, taking the example of File 1, which is 100 MB, when it is stored in HDFS the file is split into two blocks. The first block holds 64 MB and the second block holds 36 MB. The second block does not occupy a full 64 MB; it is only as large as the remaining data. This is how blocks are created in HDFS. Let us now see how they are stored.
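A small client sketch can confirm this split by listing a file's blocks and their actual lengths (a running HDFS cluster and the Hadoop client on the classpath are assumed; the path /data/file1 is hypothetical):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlocks {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists in your cluster.
        Path file1 = new Path("/data/file1");

        FileStatus status = fs.getFileStatus(file1);
        BlockLocation[] blocks =
                fs.getFileBlockLocations(status, 0, status.getLen());

        // For a 100 MB file with a 64 MB block size this prints two blocks:
        // one of 64 MB and one of 36 MB (the last block holds only the remainder).
        for (BlockLocation b : blocks) {
            System.out.println("offset=" + b.getOffset()
                    + " length=" + b.getLength()
                    + " hosts=" + String.join(",", b.getHosts()));
        }
    }
}
```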

 

HDFS has a default replication factor of three, so the first block is stored on three data nodes. Though the default is three, the replication factor can be changed as per the user's requirement; it can be increased as well as decreased. Replication provides fault tolerance: even if one node fails, the data can be retrieved from either of the other two nodes. The second block is stored in the same way. Another feature of HDFS is that different files can have different block sizes, and files with different block sizes can be stored in the same cluster.
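Both the replication factor and the block size can be set per file. A hedged client sketch (the paths are hypothetical and a running cluster is assumed) that raises the replication of one file and creates another with its own replication factor and block size:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationAndBlockSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical paths used only for illustration.
        Path existing = new Path("/data/file1");
        Path newFile  = new Path("/data/file2");

        // Raise the replication factor of an existing file from the default 3 to 5.
        fs.setReplication(existing, (short) 5);

        // Create a new file with its own replication factor (2) and its own
        // block size (128 MB); files with different block sizes can live in
        // the same cluster.
        FSDataOutputStream out = fs.create(
                newFile,
                true,               // overwrite if it already exists
                4096,               // buffer size
                (short) 2,          // replication factor for this file
                128L * 1024 * 1024  // block size for this file
        );
        out.close();
    }
}
```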