ssh: Could not resolve hostname. Name or service not known
The /etc/hosts file acts kind of like a local DNS, for when you don't have real DNS associated with an IP address.
Do you really need a {public dns of 1st instance} node1
mapping if you can use {public dns of 1st instance} directly in the masters and slaves files?
Moreover, it's better to use the private IP addresses of the Amazon instances instead of the public ones. You can run ifconfig
in the terminal of each instance to determine its private IP address, if any; they will typically start with 10.x.x.x, 172.x.x.x, or 192.x.x.x. You can then map those instead in /etc/hosts on each of the Amazon instances.
So, your /etc/hosts in each machine should look something like -
Machine-1:
{IP_address_1st_instance} node1
{IP_address_2nd_instance} node2
Machine-2:
{IP_address_1st_instance} node1
{IP_address_2nd_instance} node2
This is so that the Amazon instances (machines) can resolve each other, if you are planning to map them anyway.
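As a concrete sketch of the mapping described above (the 10.0.0.x addresses are made-up placeholders, and a temporary file stands in for /etc/hosts, which would need sudo on the real instances):

```shell
# Sketch: append hostname-to-private-IP mappings to a hosts file.
# A temp file stands in for /etc/hosts here; on the real instances you
# would append the same two lines to /etc/hosts (with sudo) on EVERY machine.
HOSTS_FILE="$(mktemp)"

# The private IPs below are placeholders; find the real ones by running
# `ifconfig` (or `ip -4 addr`) on each instance.
cat >> "$HOSTS_FILE" <<'EOF'
10.0.0.11 node1
10.0.0.12 node2
EOF

cat "$HOSTS_FILE"
```

The same two mapping lines go into every machine's hosts file, so each node can resolve all the others.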
Amre
Updated on July 09, 2022

Comments
-
Amre almost 2 years
I'm trying to set up Hadoop on my Amazon instances, on a 2-node cluster. Each instance has a public DNS name, which I use to reference them. So in the /etc/hosts files on both machines I append lines like this:
{public dns of 1st instance} node1
{public dns of 2nd instance} node2
I'm also able to ssh into each instance from the other by simply doing:
ssh {public dns of the other instance}
In the hadoop/conf/slaves file on the first instance I have:
localhost
node2
When I start the script bin/start-dfs.sh, it's able to start the namenode, datanode, and secondary namenode on the master, but it says:
node2: ssh: Could not resolve hostname node2: Name or service not known
The same is printed out if I try:
ssh node2
I guess the question is: how do I tell it to associate node2 with the public DNS of the second instance? Is it not enough to append the
{public dns of 2nd instance} node2
line to the /etc/hosts file? Do I have to reboot the instances?
-
Amre over 10 years There would be no problem with using the public DNS directly in the masters and slaves files. As for using the private IP: I had just replaced the public DNS with the corresponding private IP in the /etc/hosts file before I saw your answer, but it gave me the same error.
-
Amre over 10 years But I did use ip-10.x.x.x rather than 10.x.x.x; could that have been the issue?
-
Amre over 10 years I can ssh using the private IP if I do ssh ip-10.x.x.x, but not ssh 10.x.x.x.
-
SSaikia_JtheRocker over 10 years "but I did use ip-10.x.x.x, rather than 10.x.x.x" - I didn't get you?
-
Amre over 10 years The AWS console tells you the private IP, but with "ip-" in front of the address.
-
SSaikia_JtheRocker over 10 years Can you ping 10.x.x.x?
-
SSaikia_JtheRocker over 10 years Did you map the private IP addresses of both instances in each machine's /etc/hosts? i.e., both your '{IP_address_1st_instance} node1' and '{IP_address_2nd_instance} node2' mappings should go in every machine's /etc/hosts.
-
Amre over 10 years I can't ping "10.x.x.x", but I can ping "ip-10.x.x.x".
-
Amre over 10 years I was able to start everything when I replaced "node1" and "node2" with the actual IPs, but now I'm having a Hadoop problem: it tells me the connection to localhost is refused, even though I can do a passphraseless ssh into localhost.
-
SSaikia_JtheRocker over 10 years Do you have localhost mapped to 127.0.0.1, and nothing else, in /etc/hosts?
-
Amre over 10 years My work network let me open the chat page, but then I just remembered I had forgotten to start MapReduce when I ran the example. Thanks for your help; putting in the IP worked.
-
SSaikia_JtheRocker over 10 years Glad that it worked!