Friday, June 26, 2015

Hadoop fs Commands


Important hadoop fs commands

To display the usage statement for the hadoop command, run:
$ hadoop -help


1.     Display the usage statement for the hadoop fs command
    $ hadoop fs
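
For help on a single command rather than the full listing, the fs shell also accepts -help followed by a command name, for example:
    $ hadoop fs -help ls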


2.     List the contents of your home directory
    $ hadoop fs -ls ~
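
Note that ~ is expanded by your local shell. On recent Hadoop releases, running -ls with no path argument lists your home directory on the cluster (typically /user/<username>):
    $ hadoop fs -ls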


3.     Display a recursive listing of the /mapr directory
    $ hadoop fs -lsr /mapr
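
Newer Hadoop releases deprecate -lsr in favour of the -R flag to -ls, which produces the same recursive listing:
    $ hadoop fs -ls -R /mapr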


4.     Display the contents of the /tmp directory on the hadoop cluster
    $ hadoop fs -ls /tmp


5.    Display the disk-usage of the /user/root directory
    $ hadoop fs -du /user/root
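
On Hadoop 2.x, -du also accepts -s for a single summary total and -h for human-readable sizes:
    $ hadoop fs -du -s -h /user/root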


6.     Create an empty "test" file under the /user/root directory on the hadoop cluster
    $ hadoop fs -touchz /user/root/test
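
-touchz creates a zero-length file. One way to confirm it exists is -test -e, which sets the shell exit code (assuming a Bourne-style shell here):
    $ hadoop fs -test -e /user/root/test && echo "test exists"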


7.     Make a sub-directory in your home directory.
    $ hadoop fs -mkdir ~/mydir
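
If parent directories might be missing, -mkdir takes a -p flag on Hadoop 2.x and later to create them as needed, e.g.:
    $ hadoop fs -mkdir -p ~/mydir/subdir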


8.     Copy the local /etc/hosts file to your hadoop cluster
    $ hadoop fs -copyFromLocal /etc/hosts ~/mydir
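
The -put command does the same copy from the local filesystem and is often used interchangeably with -copyFromLocal:
    $ hadoop fs -put /etc/hosts ~/mydir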


9.     Verify the hosts file is present in the directory you created.
    $ hadoop fs -ls ~/mydir


10. Change the permissions of the hosts file.
   $ hadoop fs -chmod 777 ~/mydir/hosts
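
-chmod also understands symbolic modes and a -R flag for recursive changes, so a less permissive alternative to 777 could be:
    $ hadoop fs -chmod -R o+r ~/mydir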


11. Move the hosts file from the hadoop cluster to the local /tmp directory
    $ hadoop fs -moveToLocal ~/mydir/hosts /tmp/
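
Note that stock Apache Hadoop has long reported -moveToLocal as not implemented. If that is the case on your cluster, copying the file out and then deleting it on the cluster gives the same result:
    $ hadoop fs -copyToLocal ~/mydir/hosts /tmp/
    $ hadoop fs -rm ~/mydir/hosts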


12. Copy the hosts file from the hadoop cluster to the local /tmp directory
    $ hadoop fs -copyToLocal ~/mydir/hosts /tmp/
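
-get is the shorter, more commonly documented form of -copyToLocal:
    $ hadoop fs -get ~/mydir/hosts /tmp/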


13. Display the contents of the hosts file in /tmp
    $ hadoop fs -cat /tmp/hosts
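
For large files, -cat prints everything; -tail shows only the last kilobyte, which is often enough for a quick check:
    $ hadoop fs -tail /tmp/hosts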


14. Populate your subdirectory with files and time the command    

    $ time hadoop fs -copyFromLocal /usr/bin/* ~/mydir

15. Remove all the files in your subdirectory

    $ hadoop fs -rm ~/mydir/*
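
To remove the directory itself along with anything left in it, use the recursive form (-rm -r on Hadoop 2.x, -rmr on older releases):
    $ hadoop fs -rm -r ~/mydir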


