Start HDFS High Availability Cluster

In the previous blog, we discussed the HDFS high availability configuration. This blog describes the steps to start an HDFS high availability cluster. Prerequisites: Before starting the HDFS high availability cluster, make sure that the cluster meets the following prerequisites: a) If you have enabled automatic failover for a hot backup during NameNode failover, then before starting the HDFS high availability cluster, […]
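For additional context (this sketch is not taken from the post itself), the usual order of operations for bringing up an HA cluster backed by the Quorum Journal Manager is roughly the following. The Java driver below only illustrates the ordering: in practice each command is run on the host it belongs to rather than from one program, the step assignments are assumptions, and the commands use the Hadoop 3.x `hdfs --daemon` syntax (Hadoop 2.x uses `hadoop-daemon.sh` instead).

```java
import java.io.IOException;

// Illustrative sketch of the usual HA start-up sequence with the Quorum
// Journal Manager. Assumes Hadoop 3.x command syntax; in a real cluster
// each step runs on the appropriate host, not from a single driver.
public class StartHaClusterSketch {

    static void run(String... command) throws IOException, InterruptedException {
        new ProcessBuilder(command).inheritIO().start().waitFor();
    }

    public static void main(String[] args) throws Exception {
        // 1. Start the JournalNodes (on every JournalNode host).
        run("hdfs", "--daemon", "start", "journalnode");

        // 2. On the first NameNode: format it (fresh cluster only) and start it.
        run("hdfs", "namenode", "-format");
        run("hdfs", "--daemon", "start", "namenode");

        // 3. On the second NameNode: copy the metadata from the running
        //    NameNode, then start it; it comes up in standby state.
        run("hdfs", "namenode", "-bootstrapStandby");
        run("hdfs", "--daemon", "start", "namenode");

        // 4. Only if automatic failover is enabled: create the ZooKeeper
        //    znode once, then start a ZKFC alongside each NameNode.
        run("hdfs", "zkfc", "-formatZK");
        run("hdfs", "--daemon", "start", "zkfc");
    }
}
```

The ordering matters: the standby can only bootstrap once the JournalNodes and the first NameNode are running, and the ZKFCs should be started only after the ZooKeeper znode has been initialised.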

HDFS High Availability Configuration

In the previous blog, we discussed the HDFS high availability architecture. This blog describes the configuration of HDFS high availability in a Hadoop cluster. Prerequisites: Before configuring HDFS high availability, make sure that your Hadoop cluster meets the following prerequisites: a) You must have at least two nodes to enable HDFS high availability. b) If you want to configure […]
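As a rough illustration of what that configuration involves (not an example from the post itself), the sketch below sets the usual HA-related properties programmatically through Hadoop's Configuration API. The nameservice ID `mycluster` and every host name are hypothetical placeholders; in a real cluster the same keys normally live in hdfs-site.xml and core-site.xml rather than in code.

```java
import org.apache.hadoop.conf.Configuration;

// A minimal sketch of the properties an HDFS HA (Quorum Journal Manager)
// setup typically needs. The nameservice ID "mycluster" and all host names
// are placeholders, not values from the post.
public class HdfsHaConfigSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();

        // Logical nameservice and the (at least two) NameNodes behind it.
        conf.set("dfs.nameservices", "mycluster");
        conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");

        // RPC addresses of the two NameNodes.
        conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn1.example.com:8020");
        conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn2.example.com:8020");

        // Shared edit log on a JournalNode quorum.
        conf.set("dfs.namenode.shared.edits.dir",
                "qjournal://jn1.example.com:8485;jn2.example.com:8485;jn3.example.com:8485/mycluster");

        // How clients locate the currently active NameNode.
        conf.set("dfs.client.failover.proxy.provider.mycluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

        // Optional: automatic failover through ZooKeeper (ZKFC).
        conf.setBoolean("dfs.ha.automatic-failover.enabled", true);
        conf.set("ha.zookeeper.quorum",
                "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");

        System.out.println("Configured HA nameservice: " + conf.get("dfs.nameservices"));
    }
}
```

A fencing method (dfs.ha.fencing.methods) also has to be configured so that a NameNode that has lost the active role cannot keep serving writes after a failover.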

HDFS High Availability Architecture

In the previous blog, we discussed the need for and the design goals of HDFS high availability. In this blog, we will talk about the architecture of HDFS high availability. HDFS High Availability Architecture: To provide a hot backup and a consistent solution to NameNode failure, the concept of using two […]

HDFS High Availability Overview

Background: Single Point of Failure (SPOF) in HDFS: each cluster had a single NameNode, and if that machine or process became unavailable, the cluster as a whole would be unavailable until the NameNode was either restarted or brought up on a separate machine. Ecosystem dependency: Hadoop ecosystem components such as […]