"hadoop namenode -format" returns a java.net.UnknownHostException


Solution 1

A java.net.UnknownHostException is thrown when Hadoop tries to resolve the DNS name (srv-clc-04.univ-nantes.prive3) to an IP address and the lookup fails.

Look for the domain name in the Hadoop configuration files and replace it with "localhost" (or update your DNS so that the name resolves to an IP address).
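
For example, you can search the conf/ directory for the offending hostname and point the NameNode address at localhost instead. A minimal sketch; the property name and port below are the ones used in the 0.20.x pseudo-distributed setup guide, so adapt them to your own configuration:

    # Find where the unresolvable hostname is referenced
    grep -r "srv-clc-04.univ-nantes.prive3" conf/

    # Then edit the matching file (typically conf/core-site.xml) so the
    # NameNode address uses localhost, e.g.:
    #   <property>
    #     <name>fs.default.name</name>
    #     <value>hdfs://localhost:9000</value>
    #   </property>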

Solution 2

First, get the host name of your computer; it can be obtained by running the hostname command. Then add the line 127.0.0.1 localhost <hostname> to the /etc/hosts file. That should solve the problem.
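
A minimal sketch, assuming the hostname is the one shown in the startup log (editing /etc/hosts requires root):

    $ hostname
    srv-clc-04.univ-nantes.prive3
    # Map that name to the loopback address so it always resolves
    $ echo "127.0.0.1   localhost srv-clc-04.univ-nantes.prive3" | sudo tee -a /etc/hosts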


Comments

  • LabRat01010 about 4 years

    I'm currently learning Hadoop and I'm trying to set up a single-node test as described in http://hadoop.apache.org/common/docs/current/single_node_setup.html

    I've configured ssh (I can log in without a password).

    My server is on our intranet, behind a proxy.

    When I try to run

    bin/hadoop namenode -format

    I get the following java.net.UnknownHostException:

    $ bin/hadoop namenode -format
    11/06/10 15:36:47 INFO namenode.NameNode: STARTUP_MSG: 
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG:   host = java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
    STARTUP_MSG:   args = [-format]
    STARTUP_MSG:   version = 0.20.203.0
    STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May  4 07:57:50 PDT 2011
    ************************************************************/
    Re-format filesystem in /home/lindenb/tmp/HADOOP/dfs/name ? (Y or N) Y
    11/06/10 15:36:50 INFO util.GSet: VM type       = 64-bit
    11/06/10 15:36:50 INFO util.GSet: 2% max memory = 19.1675 MB
    11/06/10 15:36:50 INFO util.GSet: capacity      = 2^21 = 2097152 entries
    11/06/10 15:36:50 INFO util.GSet: recommended=2097152, actual=2097152
    11/06/10 15:36:50 INFO namenode.FSNamesystem: fsOwner=lindenb
    11/06/10 15:36:50 INFO namenode.FSNamesystem: supergroup=supergroup
    11/06/10 15:36:50 INFO namenode.FSNamesystem: isPermissionEnabled=true
    11/06/10 15:36:50 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100
    11/06/10 15:36:50 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
    11/06/10 15:36:50 INFO namenode.NameNode: Caching file names occuring more than 10 times 
    11/06/10 15:36:50 INFO common.Storage: Image file of size 113 saved in 0 seconds.
    11/06/10 15:36:50 INFO common.Storage: Storage directory /home/lindenb/tmp/HADOOP/dfs/name has been successfully formatted.
    11/06/10 15:36:50 INFO namenode.NameNode: SHUTDOWN_MSG: 
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at java.net.UnknownHostException: srv-clc-04.univ-nantes.prive3: srv-clc-04.univ-nantes.prive3
    ************************************************************/
    

    After that, I started Hadoop:

    ./bin/start-all.sh
    

    but I got another exception when I tried to copy a local file:

     bin/hadoop fs  -copyFromLocal ~/file.txt  file.txt
    
    DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/lindenb/file.txt could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1417)
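
    This message usually means that no DataNode has registered with the NameNode. A sketch of how to check that, assuming the standard 0.20.x commands (jps ships with the JDK; dfsadmin is part of Hadoop):

    # List the running Hadoop daemons; a DataNode should appear here
    $ jps
    # Ask the NameNode how many DataNodes have registered
    $ bin/hadoop dfsadmin -report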
    

    How can I fix this problem, please?

    Thanks

  • LabRat01010 almost 13 years
    Thanks, I added an alias in /etc/hosts
  • Rak over 8 years
    Thanks, I too had the same issue; updating the /etc/hosts file helped.