hadoop hdfs points to file:/// not hdfs://


By default, Hadoop runs in local mode. You probably need to set fs.default.name to hdfs://localhost.localdomain:8020/ in $HADOOP_HOME/conf/core-site.xml.

To do this, add the following to core-site.xml:

 <property>
   <name>fs.default.name</name>
   <value>hdfs://localhost.localdomain:8020/</value>
 </property>

Accumulo is confused because it uses the same default configuration to figure out where HDFS is, and that configuration defaults to file:///.
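The fallback described above can be sketched in standalone form: if core-site.xml does not set the default filesystem property, everything that consumes the Hadoop configuration (including Accumulo) resolves the filesystem to file:///. The snippet below is an illustrative sketch, not Hadoop's actual implementation; the function name and the parsing code are assumptions, though the `<property><name>/<value>` layout and the `fs.defaultFS`/`fs.default.name` keys match real core-site.xml files.

```python
# Sketch of Hadoop-style default-filesystem resolution (not Hadoop's code).
import xml.etree.ElementTree as ET

def effective_default_fs(core_site_path=None):
    """Return the default filesystem URI, falling back to file:/// when
    core-site.xml is absent or does not set the property (local mode)."""
    props = {}
    if core_site_path is not None:
        root = ET.parse(core_site_path).getroot()
        for prop in root.findall("property"):
            name = prop.findtext("name")
            if name:
                props[name] = prop.findtext("value")
    # Newer Hadoop reads fs.defaultFS; fs.default.name is the deprecated key.
    return props.get("fs.defaultFS") or props.get("fs.default.name") or "file:///"
```

With no config file at all, `effective_default_fs()` returns `file:///`, which mirrors the behavior seen in the question.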

Author: Admin
Updated on June 14, 2022

Comments

  • Admin almost 2 years

    So I installed Hadoop via Cloudera Manager cdh3u5 on CentOS 5. When I run the command

    hadoop fs -ls /

    I expected to see the contents of hdfs://localhost.localdomain:8020/

    However, it returned the contents of file:///

    That said, I can still access HDFS through

    hadoop fs -ls hdfs://localhost.localdomain:8020/

    But when installing other applications such as Accumulo, Accumulo automatically detects the Hadoop filesystem as file:///.

    The question is: has anyone run into this issue, and how did you resolve it?

    I had a look at "HDFS thrift server returns content of local FS, not HDFS", which describes a similar issue but did not solve this one. Also, I do not get this issue with Cloudera Manager cdh4.

  • Mauricio Morales over 10 years
    Apparently, the property should now be fs.defaultFS, not fs.default.name.
  • anand about 7 years
    Can we change fs.defaultFS without formatting the namenode?