Sunday, November 23, 2014
Hadoop Tips: Change default namenode and datanode directories
When you start Hadoop in pseudo-distributed mode using the sbin/start-all.sh command for the first time, the default storage directories are created under /tmp/. The problem is that /tmp/ is cleared when you restart your machine, so those directories are deleted and Hadoop can't start again.
To solve this problem, you can change the default directories through the configuration file etc/hadoop/hdfs-site.xml in your Hadoop installation. Add these configuration properties:
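A minimal hdfs-site.xml sketch; the property names dfs.namenode.name.dir and dfs.datanode.data.dir are the standard Hadoop 2.x keys, while the /home/hduser/hadoop_data paths are hypothetical, so point them at any persistent location you prefer:

```xml
<configuration>
  <!-- Where the namenode stores its metadata (hypothetical path, note the file:// URI) -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///home/hduser/hadoop_data/namenode</value>
  </property>
  <!-- Where the datanode stores its blocks (hypothetical path) -->
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///home/hduser/hadoop_data/datanode</value>
  </property>
</configuration>
```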
But please make sure your namenode and datanode directories actually exist, and don't forget to set the property values using the correct URI format (starting with file://), just like in the example. After that, you can format your namenode using this command:
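To be safe, create the directories before formatting. A minimal sketch, assuming the data directories live under your home directory (adjust the paths to match whatever you put in hdfs-site.xml):

```shell
# Create the namenode and datanode directories referenced in hdfs-site.xml.
# The hadoop_data path under $HOME is a hypothetical example; match your own config.
mkdir -p "$HOME/hadoop_data/namenode" "$HOME/hadoop_data/datanode"
```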
$ bin/hdfs namenode -format
and start Hadoop again using:
$ sbin/start-all.sh
If the namenode or datanode is still not working, you can check the log files (by default under the logs/ directory of your Hadoop installation) to see the problem.
I hope these tips help you. If you find other problems related to this, please leave a comment below. Cheers! :)