Incorrect configuration: namenode address dfs.namenode.rpc-address is not configured
Solution 1
I too was facing the same issue and finally found that there was a stray space in the fs.default.name value. Trimming the space fixed the issue. The core-site.xml above doesn't seem to have a space, so the issue may be different from what I had. My 2 cents.
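For illustration only (this is not the asker's actual file), a single stray space inside the value is enough to break the NameNode address parsing:
<!-- broken: note the trailing space inside <value> -->
<property>
<name>fs.default.name</name>
<value>hdfs://namenode:8020 </value>
</property>
<!-- fixed: no surrounding whitespace -->
<property>
<name>fs.default.name</name>
<value>hdfs://namenode:8020</value>
</property>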
Solution 2
These steps solved the problem for me:
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
echo $HADOOP_CONF_DIR
hdfs namenode -format
hdfs getconf -namenodes
./start-dfs.sh
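If the configuration directory is being picked up, hdfs getconf -namenodes should print the NameNode host(s) defined in core-site.xml; an empty or erroring result means the DataNode will still fail with the same "rpc-address is not configured" message. To keep the HADOOP_CONF_DIR setting across sessions, you can append it to your shell profile (a sketch, assuming a standard tarball layout under $HADOOP_HOME):
# persist the variable for future shells (adjust the path to your install)
echo 'export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop' >> ~/.bashrc
source ~/.bashrc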
Solution 3
Check core-site.xml under the $HADOOP_INSTALL/etc/hadoop directory and verify that the property fs.default.name (fs.defaultFS in Hadoop 2.x) is configured correctly.
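As a sketch, a correctly configured entry looks like this (the hostname and port are placeholders; use your actual NameNode address):
<property>
<!-- fs.default.name is the deprecated 1.x name; fs.defaultFS is the 2.x equivalent -->
<name>fs.defaultFS</name>
<value>hdfs://namenode:8020</value>
</property>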
Solution 4
Obviously, your core-site.xml has a configuration error.
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:8020</value>
</property>
Your fs.defaultFS is set to hdfs://namenode:8020, but your machine's hostname is datanode1, so you just need to change namenode to datanode1 and it will be OK.
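A minimal sketch of what this answer suggests, assuming a single-node setup where datanode1 is also meant to run the NameNode:
<property>
<name>fs.defaultFS</name>
<value>hdfs://datanode1:8020</value>
</property>
Alternatively, keep the value as hdfs://namenode:8020 and make sure the hostname namenode actually resolves (e.g. via /etc/hosts) from the DataNode machine.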
umbreonben
Updated on July 09, 2022
Comments
- umbreonben almost 2 years ago
I am getting this error when I try to boot up a DataNode. From what I have read, the RPC parameters are only used for an HA configuration, which I am not setting up (I think).
2014-05-18 18:05:00,589 INFO [main] impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(572)) - DataNode metrics system shutdown complete.
2014-05-18 18:05:00,589 INFO [main] datanode.DataNode (DataNode.java:shutdown(1313)) - Shutdown complete.
2014-05-18 18:05:00,614 FATAL [main] datanode.DataNode (DataNode.java:secureMain(1989)) - Exception in secureMain
java.io.IOException: Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
    at org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddresses(DFSUtil.java:840)
    at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:151)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:745)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:278)
My files look like:
[root@datanode1 conf.cluster]# cat core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:8020</value>
  </property>
</configuration>
cat hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/hdfs/data</value>
  </property>
  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>hadoop</value>
  </property>
</configuration>
I am using the latest CDH5 distro.
Installed Packages
Name    : hadoop-hdfs-datanode
Arch    : x86_64
Version : 2.3.0+cdh5.0.1+567
Release : 1.cdh5.0.1.p0.46.el6
Any helpful advice on how to get past this?
EDIT: Just use Cloudera Manager.