
Data Node Failures


The most basic advantage of Hadoop is that it is cost-effective: it does not require costly hardware and can run on simple, inexpensive machines, technically referred to as commodity hardware. Another advantage is that Hadoop supports very large clusters, so a Hadoop cluster can have hundreds or even thousands of nodes.


Another feature of Hadoop, block storage, allows it to store files that run to hundreds of terabytes or even petabytes. Each file is divided into blocks of a fixed size, and the blocks are stored across the various data nodes. When a client wants to read a particular file, the Name node tells the client where the various blocks are located so that the client can read the file directly from the data nodes.
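As a rough sketch of the idea, the snippet below splits a file of a given size into fixed-size blocks the way HDFS does logically. The 128 MB block size is the HDFS default in recent versions; the function name is ours for illustration, not a Hadoop API:

```python
BLOCK_SIZE = 128 * 1024 * 1024  # HDFS default block size (128 MB)

def split_into_blocks(file_size, block_size=BLOCK_SIZE):
    """Return the sizes of the blocks a file of `file_size` bytes is cut into."""
    sizes = []
    while file_size > 0:
        sizes.append(min(block_size, file_size))
        file_size -= block_size
    return sizes

# A 300 MB file becomes two full 128 MB blocks plus one 44 MB block;
# the Name node records which data nodes hold each of these blocks.
sizes = split_into_blocks(300 * 1024 * 1024)
```

The last block is simply whatever remains, which is why it can be smaller than the block size.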


Though all the above features are clearly advantageous, one worrying issue remains: what happens when a data node fails? Commodity hardware is prone to failure, and if even one node fails, the blocks it holds become unreadable. To take care of this problem, Hadoop comes up with a very intelligent solution: replication. When a file's blocks are saved to the data nodes, copies of each block are stored on separate data nodes; the default replication factor is 3 (the original block plus two copies), and users can increase or decrease it as per their requirement. So even if one of the nodes fails, the Name node can easily direct the client to either of the other two replicas.
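A minimal simulation of this idea (the names `place_replicas` and `locate_replica` are illustrative, not Hadoop APIs): each block is copied to three distinct data nodes, and a read succeeds as long as any one replica holder is still alive.

```python
import random

REPLICATION = 3  # HDFS default replication factor

def place_replicas(nodes, replication=REPLICATION):
    """Pick `replication` distinct data nodes to hold copies of one block."""
    return random.sample(nodes, replication)

def locate_replica(block_id, block_map, live_nodes):
    """Return a live node holding the block, as the Name node would for a reader."""
    for node in block_map[block_id]:
        if node in live_nodes:
            return node
    raise IOError("all replicas of %s lost" % block_id)

nodes = ["dn1", "dn2", "dn3", "dn4", "dn5"]
block_map = {"blk_0001": place_replicas(nodes)}

# Fail one of the replica holders: the block is still readable from the others.
live = set(nodes) - {block_map["blk_0001"][0]}
reader_node = locate_replica("blk_0001", block_map, live)
```

Only when every node holding a replica has failed does the read actually fail, which with three replicas on independent machines is a rare event.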


Of the two copies made of each block, one is saved on another node in the same rack as the original block, and the other on a node in a different rack. This is done so that even if an entire rack fails, the Name node still has access to at least one replica of every block. This is how Hadoop takes care of data node failure and ensures data availability even when hardware fails.
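The placement the text describes can be sketched as follows (assumed names, and a simplification of HDFS's actual placement policy): one replica on the local rack, one on another node of the same rack, and one on a different rack.

```python
import random

def place_block(racks, local_rack):
    """Place three replicas as described above: the original block on the
    local rack, one copy on another node of the same rack, and one copy
    on a node of a different rack (a sketch, not HDFS's exact policy)."""
    original = random.choice(racks[local_rack])
    same_rack_copy = random.choice([n for n in racks[local_rack] if n != original])
    remote_rack = random.choice([r for r in racks if r != local_rack])
    off_rack_copy = random.choice(racks[remote_rack])
    return [original, same_rack_copy, off_rack_copy]

racks = {"rack1": ["dn1", "dn2", "dn3"], "rack2": ["dn4", "dn5", "dn6"]}
replicas = place_block(racks, "rack1")

# Even if every node in rack1 goes down, one replica survives on rack2.
survivors = [n for n in replicas if n in racks["rack2"]]
```

Keeping two replicas on one rack limits cross-rack write traffic, while the single off-rack copy protects against the loss of a whole rack (for example, a failed rack switch).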


Another feature that helps Hadoop take care of data node failures is the heartbeat. This is a signal that every data node sends to the Name node every 3 seconds to indicate that it is alive and working. If heartbeats from a data node stop arriving (in practice the Name node tolerates several minutes of silence before giving up on a node), the Name node marks that data node as dead and arranges for the blocks it held to be re-replicated onto healthy data nodes. Hence, even though Hadoop runs on commodity hardware, every possible precaution has been taken to ensure data availability even in case of hardware failure.
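The heartbeat check and the resulting re-replication can be sketched like this (illustrative names; the timeout constant reflects the roughly ten-and-a-half-minute default after which HDFS declares a node dead, not a single missed heartbeat):

```python
HEARTBEAT_INTERVAL = 3   # seconds between heartbeats (HDFS default)
DEAD_TIMEOUT = 630       # seconds of silence before a node is declared dead

def find_dead_nodes(last_heartbeat, now, timeout=DEAD_TIMEOUT):
    """Nodes whose last heartbeat is older than the timeout are declared dead."""
    return {n for n, t in last_heartbeat.items() if now - t > timeout}

# dn1 and dn2 keep heartbeating; dn3 went silent at t=100.
last_heartbeat = {"dn1": 800.0, "dn2": 800.0, "dn3": 100.0}
dead = find_dead_nodes(last_heartbeat, now=803.0)
live = set(last_heartbeat) - dead

# Re-replicate blocks held by dead nodes onto live nodes that
# do not already hold a copy, restoring the replica count.
block_map = {"blk_0001": ["dn1", "dn3"], "blk_0002": ["dn2", "dn3"]}
for block, holders in block_map.items():
    survivors = [h for h in holders if h not in dead]
    while len(survivors) < len(holders):
        survivors.append(min(live - set(survivors)))
    block_map[block] = survivors
```

Because every block already has replicas elsewhere, losing a node never loses data; it only triggers copying to bring the replica count back up.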