Sunday, November 23, 2014
Hadoop Tips: Change default namenode and datanode directories
When we start Hadoop in pseudo-distributed mode using the sbin/start-all.sh command for the first time, the default storage directories are created under /tmp. The problem is that /tmp is cleared on reboot, so if you restart your machine those directories are deleted and you can't start Hadoop again.
To solve this problem, you can change the default directories in the configuration file found at etc/hadoop/hdfs-site.xml inside your Hadoop installation. Add these configuration properties:
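For example, a minimal hdfs-site.xml might look like this (the paths under /home/hadoop are placeholders; point them at directories of your choice):

```xml
<configuration>
  <!-- Where the namenode stores the filesystem metadata -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hadoop/hdfs/namenode</value>
  </property>
  <!-- Where the datanode stores the actual data blocks -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hadoop/hdfs/datanode</value>
  </property>
</configuration>
```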
Please make sure your namenode and datanode directories actually exist, and don't forget to set the property values using the correct URI format (starting with file://), just like in the example. After that, you can format your namenode using this command:
$ bin/hdfs namenode -format
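If the configured directories don't exist yet, create them before formatting. A quick sketch (the paths here are examples; adjust them to match whatever you put in hdfs-site.xml):

```shell
# Create the namenode and datanode directories (example paths)
mkdir -p "$HOME/hdfs/namenode" "$HOME/hdfs/datanode"

# Verify that both directories now exist
ls -d "$HOME/hdfs/namenode" "$HOME/hdfs/datanode"
```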
and start Hadoop again using:
$ sbin/start-all.sh
If the namenode or datanode is still not working, you can check the log files to find the problem.
Hope these tips help you. If you run into other problems related to this, please leave a comment below. Cheers! :)