
Administering an HDFS High Availability Cluster

Manually Failing Over to the Standby NameNode

Manually Failing Over to the Standby NameNode Using Cloudera Manager

If you are running an HDFS service with HA enabled, you can manually cause the active NameNode to fail over to the standby NameNode. This is useful for planned downtime, such as hardware changes, configuration changes, or software upgrades of your primary host.

  1. Go to the HDFS service.
  2. Click the Instances tab.
  3. Click Federation and High Availability.
  4. Locate the row for the Nameservice where you want to fail over the NameNode. (Multiple rows display only when using HDFS Federation.)
  5. Select Actions > Manual Failover. (This option does not appear if HA is not enabled for the cluster.)
  6. From the pop-up, select the NameNode that should be made active, then click Manual Failover.
      Note: For advanced use only: You can set the Force Failover checkbox to force the selected NameNode to become active, irrespective of its state or the other NameNode's state. Forcing a failover first attempts to transition the selected NameNode to active mode and the other NameNode to standby mode, and does so even if the selected NameNode is in safe mode. If this fails, it proceeds to transition the selected NameNode to active mode anyway. To avoid having two active NameNodes, use this option only if the other NameNode is either definitely stopped, or can be transitioned to standby mode by the first failover step.
  7. When all the steps have been completed, click Finish.

Cloudera Manager transitions the NameNode you selected to be the active NameNode, and the other NameNode to be the standby NameNode. HDFS should never have two active NameNodes.

Manually Failing Over to the Standby NameNode Using the Command Line

To initiate a failover between two NameNodes, run the command hdfs haadmin -failover.

This command causes a failover from the first provided NameNode to the second. If the first NameNode is in the Standby state, this command simply transitions the second to the Active state without error. If the first NameNode is in the Active state, an attempt will be made to gracefully transition it to the Standby state. If this fails, the fencing methods (as configured by dfs.ha.fencing.methods) will be attempted in order until one of the methods succeeds. Only after this process will the second NameNode be transitioned to the Active state. If no fencing method succeeds, the second NameNode will not be transitioned to the Active state, and an error will be returned.
  Note: Running hdfs haadmin -failover from the command line works whether you have configured HA from the command line or using Cloudera Manager. This means you can initiate a failover manually even if Cloudera Manager is unavailable.
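
For example, if the NameNode IDs defined in hdfs-site.xml are nn1 and nn2 (hypothetical IDs used here for illustration), the following command makes nn2 the active NameNode and transitions nn1 to standby:
nn2:
hdfs haadmin -failover nn1 nn2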

Moving an HA NameNode to a New Host

Moving an HA NameNode to a New Host Using the Command Line

Use the following steps to move one of the NameNodes to a new host.

In this example, the current NameNodes are called nn1 and nn2, and the new NameNode is nn2-alt. The example assumes that nn2-alt is already a member of this CDH 5 HA cluster, that automatic failover is configured, and that a JournalNode on nn2 is to be moved to nn2-alt in addition to the NameNode service itself.

The procedure moves the NameNode and JournalNode services from nn2 to nn2-alt, reconfigures nn1 to recognize the new location of the JournalNode, and restarts nn1 and nn2-alt in the new HA configuration.

Step 1: Make sure that nn1 is the active NameNode

Make sure that the NameNode that is not going to be moved is active; in this example, nn1 must be active. You can use the NameNodes' web UIs to see which is active; see Start the NameNodes.
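
You can also check each NameNode's state from the command line with the getServiceState subcommand (see Other HDFS haadmin Commands below); each command prints either "active" or "standby":
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2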

If nn1 is not the active NameNode, use the hdfs haadmin -failover command to initiate a failover from nn2 to nn1:
hdfs haadmin -failover nn2 nn1

Step 2: Stop services on nn2

Once you have made sure that the node to be moved is inactive, stop services on that node; in this example, stop services on nn2. Stop the NameNode, the ZKFC daemon if this is an automatic-failover deployment, and the JournalNode if you are moving it. Proceed as follows.
  1. Stop the NameNode daemon:
    $ sudo service hadoop-hdfs-namenode stop
  2. Stop the ZKFC daemon if it is running:
    $ sudo service hadoop-hdfs-zkfc stop
  3. Stop the JournalNode daemon if it is running:
    $ sudo service hadoop-hdfs-journalnode stop 
  4. Make sure these services are not set to restart on boot. If you are not planning to use nn2 as a NameNode again, you may want to remove the services.
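    For example, on a RHEL-compatible system, you can disable the services with chkconfig:
    $ sudo chkconfig hadoop-hdfs-namenode off
    $ sudo chkconfig hadoop-hdfs-zkfc off
    $ sudo chkconfig hadoop-hdfs-journalnode off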

Step 3: Install the NameNode daemon on nn2-alt

See the instructions for installing hadoop-hdfs-namenode under Step 3: Install CDH 5 with YARN or Step 4: Install CDH 5 with MRv1.
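
For example, on a RHEL-compatible system you might install the package with yum (a sketch only; follow the linked instructions for your operating system and repository configuration):
$ sudo yum install hadoop-hdfs-namenode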

Step 4: Configure HA on nn2-alt

See Enabling HDFS HA for the properties to configure on nn2-alt in core-site.xml and hdfs-site.xml, along with explanations and instructions. You should copy the values that are already set in the corresponding files on nn2; one way to do this is shown in the example after this list.
  • If you are relocating a JournalNode to nn2-alt, follow these directions to install it, but do not start it yet.
  • If you are using automatic failover, make sure you follow the instructions for configuring the necessary properties on nn2-alt and initializing the HA state in ZooKeeper.
      Note: You do not need to shut down the cluster to do this if automatic failover is already configured as your failover method; shutdown is required only if you are switching from manual to automatic failover.
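
For example, one way to carry over the existing values is to copy the configuration files from nn2 to nn2-alt and then adjust them as needed. This is a sketch that assumes the default CDH configuration directory /etc/hadoop/conf; run the commands on nn2-alt:
$ sudo scp nn2:/etc/hadoop/conf/core-site.xml /etc/hadoop/conf/
$ sudo scp nn2:/etc/hadoop/conf/hdfs-site.xml /etc/hadoop/conf/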

Step 5: Copy the contents of the dfs.name.dir and dfs.journalnode.edits.dir directories to nn2-alt

Use rsync or a similar tool to copy the contents of the dfs.name.dir directory, and the dfs.journalnode.edits.dir directory if you are moving the JournalNode, from nn2 to nn2-alt.
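
For example, run the following on nn2-alt. This sketch assumes that dfs.name.dir is /data/1/dfs/nn and dfs.journalnode.edits.dir is /data/1/dfs/jn; substitute the values from your hdfs-site.xml:
$ sudo rsync -a nn2:/data/1/dfs/nn/ /data/1/dfs/nn/
$ sudo rsync -a nn2:/data/1/dfs/jn/ /data/1/dfs/jn/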

Step 6: If you are moving a JournalNode, update dfs.namenode.shared.edits.dir on nn1

If you are relocating a JournalNode from nn2 to nn2-alt, update dfs.namenode.shared.edits.dir in hdfs-site.xml on nn1 to reflect the new hostname. See this section for more information about dfs.namenode.shared.edits.dir.
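
For example, if the shared edits URI previously included nn2, the value on nn1 would change along the following lines (hypothetical third JournalNode host jn3 and nameservice mycluster; 8485 is the default JournalNode port):
Before:  qjournal://nn1:8485;nn2:8485;jn3:8485/mycluster
After:   qjournal://nn1:8485;nn2-alt:8485;jn3:8485/mycluster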

Step 7: If you are using automatic failover, install the zkfc daemon on nn2-alt

For instructions, see Deploy Automatic Failover (if it is configured), but do not start the daemon yet.
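
For example, on a RHEL-compatible system (a sketch only; see the linked instructions for your platform):
$ sudo yum install hadoop-hdfs-zkfc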

Step 8: Start services on nn2-alt

Start the JournalNode if you are moving one to nn2-alt, start the NameNode, start the ZKFC if you are using automatic failover, and set the services to restart on boot. Proceed as follows.

  1. Start the JournalNode daemon:
    $ sudo service hadoop-hdfs-journalnode start 
  2. Start the NameNode daemon:
    $ sudo service hadoop-hdfs-namenode start
  3. Start the ZKFC daemon:
    $ sudo service hadoop-hdfs-zkfc start
  4. Set these services to restart on boot; for example, on a RHEL-compatible system:
    $ sudo chkconfig hadoop-hdfs-namenode on
    $ sudo chkconfig hadoop-hdfs-zkfc on
    $ sudo chkconfig hadoop-hdfs-journalnode on
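
Once the daemons are running, you can verify the HA state of each NameNode using the NameNode IDs from this example; nn1 should report "active" and nn2-alt should report "standby":
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2-alt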

Step 9: If you are relocating a JournalNode, fail over to nn2-alt

hdfs haadmin -failover nn1 nn2-alt

Step 10: If you are relocating a JournalNode, restart nn1

Restart the NameNode daemon on nn1 to force it to re-read the configuration:
$ sudo service hadoop-hdfs-namenode stop 
$ sudo service hadoop-hdfs-namenode start

Other HDFS haadmin Commands

After your HA NameNodes are configured and started, you will have access to some additional commands to administer your HA HDFS cluster. Specifically, you should familiarize yourself with the subcommands of the hdfs haadmin command.

This page describes high-level uses of some important subcommands. For detailed usage information for each subcommand, run hdfs haadmin -help <command>.

getServiceState

getServiceState - determine whether the given NameNode is active or standby

Connect to the provided NameNode to determine its current state, printing either "standby" or "active" to STDOUT as appropriate. This subcommand might be used by cron jobs or monitoring scripts which need to behave differently based on whether the NameNode is currently active or standby.
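
For example, a monitoring script might branch on the reported state. This is a minimal sketch using a hypothetical NameNode ID nn1:
# getServiceState prints "active" or "standby" for the given NameNode ID.
if [ "$(hdfs haadmin -getServiceState nn1)" = "active" ]; then
    echo "nn1 is currently the active NameNode"
fi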

checkHealth

checkHealth - check the health of the given NameNode

Connect to the provided NameNode to check its health. The NameNode is capable of performing some diagnostics on itself, including checking if internal services are running as expected. This command will return 0 if the NameNode is healthy, non-zero otherwise. One might use this command for monitoring purposes.
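
For example, using a hypothetical NameNode ID nn1; a zero exit status indicates a healthy NameNode:
hdfs haadmin -checkHealth nn1
echo $?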

Using the dfsadmin Command When HA is Enabled

By default, applicable dfsadmin command options are run against both active and standby NameNodes. To limit an option to a specific NameNode, use the -fs option. For example:

To turn safe mode on for both NameNodes, run:

hdfs dfsadmin -safemode enter

To turn safe mode on for a single NameNode, run:

hdfs dfsadmin -fs hdfs://<host>:<port> -safemode enter

For a full list of dfsadmin command options, run: hdfs dfsadmin -help.

Converting From an NFS-mounted Shared Edits Directory to Quorum-based Storage

Converting From an NFS-mounted Shared Edits Directory to Quorum-based Storage Using Cloudera Manager

Converting an HA configuration from using an NFS-mounted shared edits directory to Quorum-based storage involves disabling the current HA configuration and then enabling HA using Quorum-based storage.

  1. Disable HA.
  2. Although the standby NameNode role is removed, its name directories are not deleted. Empty these directories.
  3. Enable HA with Quorum-based storage.

Converting From an NFS-mounted Shared Edits Directory to Quorum-based Storage Using the Command Line

To switch from shared storage using NFS to Quorum-based storage, proceed as follows:
  1. Disable HA.
  2. Redeploy HA using Quorum-based storage.