Sunday, July 6, 2014

Learning java selenium webdriver

Today we are going to do something different: we will explore software testing technology. As end users we mostly deal with point and click, and in this article we are going to learn how to test a user interface.

There is a lot of testing software out there; just google and you will find plenty. In this article, we are going to use the Java Selenium WebDriver. To get started quickly, download the Selenium Java client driver. For complete information on Selenium, read here.

Additional optional downloads are available at http://docs.seleniumhq.org/download/



Let's continue this article with two examples.
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

public class GoogleSuggest {

    public static void main(String[] args) throws Exception {
        // The Firefox driver supports javascript
        WebDriver driver = new FirefoxDriver();

        // Go to the Google Suggest home page
        driver.get("http://www.google.com/webhp?complete=1&hl=en");

        // Enter the query string "Cheese"
        WebElement query = driver.findElement(By.name("q"));
        query.sendKeys("Cheese");

        // Sleep until the div we want is visible or 5 seconds is over
        long end = System.currentTimeMillis() + 5000;
        while (System.currentTimeMillis() < end) {
            WebElement resultsDiv = driver.findElement(By.className("gssb_e"));

            // If results have been returned, the results are displayed in a drop down.
            if (resultsDiv.isDisplayed()) {
                break;
            }
        }

        // And now list the suggestions
        List<WebElement> allSuggestions = driver.findElements(By.xpath("//td[@class='gssb_a gbqfsf']"));

        for (WebElement suggestion : allSuggestions) {
            System.out.println("suggestion => " + suggestion.getText());
        }

        driver.quit();
    }

}

import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

import com.thoughtworks.selenium.SeleneseTestBase;
import com.thoughtworks.selenium.webdriven.WebDriverBackedSelenium;

public class BaseSeleniumTest extends SeleneseTestBase {

    @Before
    public void setUp() throws Exception {
        WebDriver driver1 = new FirefoxDriver();
        String baseUrl = "http://google.com/";
        selenium = new WebDriverBackedSelenium(driver1, baseUrl);
    }

    @Test
    public void testGoogle() throws Exception {
        selenium.open("http://google.com/");
        System.out.println(selenium.getTitle());
        assertEquals("Google", selenium.getTitle());
        selenium.close();
    }
}

The first example constructs a Firefox WebDriver, loads the Google URL, and mimics typing by sending the keys "Cheese" to the query field. Once the WebElement resultsDiv is displayed, all the suggestions get printed.
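As a side note, instead of the hand-rolled polling loop, Selenium's support library also offers explicit waits. Below is a minimal sketch of the same wait written with WebDriverWait, assuming the selenium-support jar is on the classpath; it is an alternative shown for illustration, not part of the original example.
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class GoogleSuggestExplicitWait {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        driver.get("http://www.google.com/webhp?complete=1&hl=en");
        driver.findElement(By.name("q")).sendKeys("Cheese");

        // Wait up to 5 seconds for the suggestion drop down to become visible.
        new WebDriverWait(driver, 5)
                .until(ExpectedConditions.visibilityOfElementLocated(By.className("gssb_e")));

        driver.quit();
    }
}

The explicit wait polls the DOM for you and throws a TimeoutException if the element never becomes visible within the timeout.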

The second example makes it much easier to construct tests. By extending SeleneseTestBase, you can use the selenium object and call the rich methods it provides from your test subclass. As seen here, similar to the first example, the test starts by constructing a FirefoxDriver. Then comes the very familiar JUnit test: open the Google link, assert that the title is Google, and close the selenium object.

That's it for this article; I hope it helps you get started. Remember to donate if you would like to contribute back.

Saturday, July 5, 2014

Study MongoDB GridFS with java example

In the past we have learned basic MongoDB and studied its data model; in this article, we will study MongoDB GridFS by storing a file into MongoDB. Below is a simple Java application that shows how to store, retrieve, and eventually delete a file.
import java.io.File;
import java.io.IOException;

import com.mongodb.DB;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;
import com.mongodb.MongoException;
import com.mongodb.gridfs.GridFS;
import com.mongodb.gridfs.GridFSDBFile;
import com.mongodb.gridfs.GridFSInputFile;

public class LearnMongo {

    public static void main(String[] args) throws MongoException, IOException {
        Mongo mongo = new Mongo("192.168.0.2", 27017);
        DB db = mongo.getDB("mp3db");

        // save the mp3 into GridFS
        String newFilename = "django.mp3";
        File mp3File = new File("src/resources/django.mp3");
        GridFS gfsMp3 = new GridFS(db, "mp3");
        GridFSInputFile gfsFile = gfsMp3.createFile(mp3File);
        gfsFile.setFilename(newFilename);
        gfsFile.setContentType("audio/mpeg");
        System.out.println(gfsFile.toString());
        gfsFile.save();

        // retrieve the mp3 by filename
        GridFSDBFile imageForOutput = gfsMp3.findOne(newFilename);
        System.out.println(imageForOutput);

        // list all files stored in this GridFS bucket
        DBCursor cursor = gfsMp3.getFileList();
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }

        // write the stored mp3 out to another file
        imageForOutput.writeTo("/home/jason/Desktop/newsong.mp3");

        // delete the mp3 from GridFS
        gfsMp3.remove(gfsMp3.findOne(newFilename));

    }

}

We start by connecting to the server; in this example the MongoDB instance is running on host 192.168.0.2 on port 27017. You may want to check the MongoDB configuration if you connect remotely, as the default configuration only listens on localhost.

Then we obtain a MongoDB DB object for the mp3db database. You can store other kinds of objects as well, but for this example I'm going to store an mp3. With that ready, we store the mp3; the important piece of code is shown below.
GridFS gfsMp3 = new GridFS(db, "mp3");
GridFSInputFile gfsFile = gfsMp3.createFile(mp3File);

We instantiate two objects, GridFS and GridFSInputFile. You can set additional information such as the filename and content type. Calling GridFSInputFile.save() saves the object into MongoDB. If you have access to the MongoDB CLI, a command such as > db.mp3.files.find(); will show the output below.
{ "_id" : ObjectId("53ad60f944aeaca83109d253"), "chunkSize" : NumberLong(262144), "length" : NumberLong(316773), "md5" : "7293e9fd795e2bb6d5035e5b69cb2923", "filename" : "django.mp3", "contentType" : "audio/mpeg", "uploadDate" : ISODate("2014-06-27T12:18:01.934Z"), "aliases" : null }

To find the mp3, you can use the code GridFSDBFile imageForOutput = gfsMp3.findOne(newFilename); below is the output.
{ "_id" : { "$oid" : "53ad60f944aeaca83109d253"} , "chunkSize" : 262144 , "length" : 316773 , "md5" : "7293e9fd795e2bb6d5035e5b69cb2923" , "filename" : "django.mp3" , "contentType" : "audio/mpeg" , "uploadDate" : { "$date" : "2014-06-27T12:18:01Z"} , "aliases" : null }

You can also use GridFS.getFileList() to retrieve all the files currently stored in this database. The code continues by writing the object out to a file; as you can see, I'm writing to the desktop just to ensure it does not come from the source directory.
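If you would rather consume the stored file in memory than write it to disk, GridFSDBFile also exposes an input stream. A small hedged sketch, assuming the imageForOutput object from the listing above:
// Hypothetical: stream the stored mp3 instead of using writeTo()
try (java.io.InputStream in = imageForOutput.getInputStream()) {
    byte[] buffer = new byte[8192];
    int read;
    long total = 0;
    while ((read = in.read(buffer)) != -1) {
        total += read; // here you could feed the bytes to a player, HTTP response, etc.
    }
    System.out.println("streamed " + total + " bytes from GridFS");
}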

I end this article by removing the object from the MongoDB database.

Friday, July 4, 2014

Study MongoDB data models

Today we are going to learn about MongoDB data models.

It is important to study the data model because, as a developer, you want to leverage what MongoDB excels at and be aware of what it is not suitable for.

You can basically store a few documents and reference them using an id field.
But remember this needs two round trips back and forth between the application servers
and the mongo database.

As such, in this scenario it is better to embed one document inside the other.
user document
{
_id: <ObjectId1>, <-------------+
username : "jasonwee" |
} |
|
contact document |
{ |
_id: <ObjectId2>, |
user_id: <ObjectId1> <-------------+
phone: "012-3456789"
}

into
user document
{
_id: <ObjectId1>,
contact : {
phone: "012-3456789"
}
}

This modelling guarantees you atomicity of a document, as MongoDB write operations
are atomic at the document level.
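As a small illustration with the Java driver used elsewhere on this blog (a hedged sketch; the database name, collection name and values are made up), inserting the embedded form above is a single atomic write:
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;

public class EmbedContactExample {

    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DBCollection users = mongo.getDB("test").getCollection("user");

        // the whole user, contact included, is written in one atomic operation
        BasicDBObject user = new BasicDBObject("username", "jasonwee")
                .append("contact", new BasicDBObject("phone", "012-3456789"));
        users.insert(user);

        mongo.close();
    }
}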

Indexes

Use indexes to improve performance for common queries. Build indexes on fields that appear often in queries and for all operations that return sorted results. MongoDB automatically creates a unique index on the _id field.

Each index requires at least 8KB of data space.
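With the Java driver, building such an index could look like the hedged sketch below; the collection and field names are made up for illustration.
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;

public class CreateIndexExample {

    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DBCollection inventory = mongo.getDB("test").getCollection("inventory");

        // ascending index on the "item" field, assumed to appear in common queries
        inventory.createIndex(new BasicDBObject("item", 1));

        mongo.close();
    }
}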

GridFS

GridFS is a specification for storing and retrieving files that exceed the BSON-document size limit of 16MB.

Model Relationships Between Documents

  • Model One-to-One Relationships with Embedded Documents
    Presents a data model that uses embedded documents to describe one-to-one relationships between connected data.

  • Model One-to-Many Relationships with Embedded Documents
    Presents a data model that uses embedded documents to describe one-to-many relationships between connected data.

  • Model One-to-Many Relationships with Document References
    Presents a data model that uses references to describe one-to-many relationships between documents.


Model Tree Structures

MongoDB allows various ways to use tree data structures to model large hierarchical or nested data relationships.

  • Model Tree Structures with Parent References 
    Presents a data model that organizes documents in a tree-like structure by storing references to “parent” nodes in “child” nodes.

  • Model Tree Structures with Child References
    Presents a data model that organizes documents in a tree-like structure by storing references to “child” nodes in “parent” nodes.

  • Model Tree Structures with an Array of Ancestors
    Presents a data model that organizes documents in a tree-like structure by storing references to “parent” nodes and an array that stores all ancestors.

  • Model Tree Structures with Materialized Paths
    Presents a data model that organizes documents in a tree-like structure by storing full relationship paths between documents. In addition to the tree node, each document stores the _id of the node's ancestors or path as a string.

  • Model Tree Structures with Nested Sets
    Presents a data model that organizes documents in a tree-like structure using the Nested Sets pattern. This optimizes discovering subtrees at the expense of tree mutability.

Sunday, June 22, 2014

Learning basic MongoDB by installing and using CRUD

Today we are going to learn MongoDB, including understanding what MongoDB is, installing it, and doing CRUD operations. We start with the basic question.

what is MongoDB?

MongoDB (from "humongous") is a cross-platform document-oriented database. Classified as a NoSQL database, MongoDB eschews the traditional table-based relational database structure in favor of JSON-like documents with dynamic schemas (MongoDB calls the format BSON), making the integration of data in certain types of applications easier and faster.

With that said, let's move on to installing MongoDB. There are many ways to install MongoDB, but for this article the one I've chosen is to install MongoDB using the deb packages built by MongoDB. Even though MongoDB comes with Ubuntu, the version in the repository is just too old: currently the Ubuntu repository has mongodb version 1:2.4.9-1ubuntu2, while the official production release version is 2.6.1.

The instructions below are from http://docs.mongodb.org/manual/tutorial/install-mongodb-on-ubuntu/, but I have summarized them into a one-liner. It adds the official MongoDB repository and installs the latest version.
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && echo 'deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen' | sudo tee /etc/apt/sources.list.d/mongodb.list && sudo apt-get update && sudo apt-get install mongodb-org

If everything goes well, you should get MongoDB installation output similar to the one below:
jason@localhost:~$ sudo apt-get install mongodb-org
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages were automatically installed and are no longer required:
jhead libcec2 libgdata-google1.2-1 libgdata1.2-1 libjdependency-java liblockdev1 libmaven-archiver-java libmaven-clean-plugin-java
libmaven-compiler-plugin-java libmaven-dependency-tree-java libmaven-filtering-java libmaven-install-plugin-java libmaven-jar-plugin-java
libmaven-resources-plugin-java libmaven-shade-plugin-java libphp-adodb libpigment0.3-11 libplexus-compiler-java libplexus-digest-java oxideqt-codecs-extra
php-auth-sasl php-cache php-date php-file php-http-request php-log php-mail php-mail-mime php-mdb2 php-mdb2-driver-mysql php-net-dime php-net-ftp
php-net-smtp php-net-socket php-net-url php-services-weather php-soap php-xml-parser php-xml-serializer printer-driver-c2esp printer-driver-min12xxw
printer-driver-pnm2ppa printer-driver-pxljr python-axiom python-coherence python-configobj python-epsilon python-gpod python-louie python-nevow python-pgm
python-pyasn1 python-storm python-tagpy python-twill python-twisted-conch python-twisted-web2 qtdeclarative5-window-plugin tinymce2 xbmc-pvr-argustv
xbmc-pvr-dvbviewer xbmc-pvr-mediaportal-tvserver xbmc-pvr-mythtv-cmyth xbmc-pvr-nextpvr xbmc-pvr-njoy xbmc-pvr-tvheadend-hts xbmc-pvr-vdr-vnsi
xbmc-pvr-vuplus xdg-user-dirs-gtk
Use 'apt-get autoremove' to remove them.
The following extra packages will be installed:
mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools
The following NEW packages will be installed:
mongodb-org mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools
0 upgraded, 5 newly installed, 0 to remove and 51 not upgraded.
Need to get 113 MB of archives.
After this operation, 284 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://downloads-distro.mongodb.org/repo/ubuntu-upstart/ dist/10gen mongodb-org-shell i386 2.6.1 [4,389 kB]
Get:2 http://downloads-distro.mongodb.org/repo/ubuntu-upstart/ dist/10gen mongodb-org-server i386 2.6.1 [9,308 kB]
Get:3 http://downloads-distro.mongodb.org/repo/ubuntu-upstart/ dist/10gen mongodb-org-mongos i386 2.6.1 [7,045 kB]
Get:4 http://downloads-distro.mongodb.org/repo/ubuntu-upstart/ dist/10gen mongodb-org-tools i386 2.6.1 [92.3 MB]
Get:5 http://downloads-distro.mongodb.org/repo/ubuntu-upstart/ dist/10gen mongodb-org i386 2.6.1 [3,652 B]
Fetched 113 MB in 3min 25s (549 kB/s)
Selecting previously unselected package mongodb-org-shell.
(Reading database ... 564794 files and directories currently installed.)
Preparing to unpack .../mongodb-org-shell_2.6.1_i386.deb ...
Unpacking mongodb-org-shell (2.6.1) ...
Selecting previously unselected package mongodb-org-server.
Preparing to unpack .../mongodb-org-server_2.6.1_i386.deb ...
Unpacking mongodb-org-server (2.6.1) ...
Selecting previously unselected package mongodb-org-mongos.
Preparing to unpack .../mongodb-org-mongos_2.6.1_i386.deb ...
Unpacking mongodb-org-mongos (2.6.1) ...
Selecting previously unselected package mongodb-org-tools.
Preparing to unpack .../mongodb-org-tools_2.6.1_i386.deb ...
Unpacking mongodb-org-tools (2.6.1) ...
Selecting previously unselected package mongodb-org.
Preparing to unpack .../mongodb-org_2.6.1_i386.deb ...
Unpacking mongodb-org (2.6.1) ...
Processing triggers for man-db (2.6.7.1-1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up mongodb-org-shell (2.6.1) ...
Setting up mongodb-org-server (2.6.1) ...
Adding system user `mongodb' (UID 143) ...
Adding new user `mongodb' (UID 143) with group `nogroup' ...
Not creating home directory `/home/mongodb'.
Adding group `mongodb' (GID 155) ...
Done.
Adding user `mongodb' to group `mongodb' ...
Adding user mongodb to group mongodb
Done.
mongod start/running, process 22386
Setting up mongodb-org-mongos (2.6.1) ...
Setting up mongodb-org-tools (2.6.1) ...
Processing triggers for ureadahead (0.100.0-16) ...
Setting up mongodb-org (2.6.1) ...

Looks like the installation finished fine, and the server has even been started already. So now let's play with the mongo command line.
jason@localhost:~$ mongo
MongoDB shell version: 2.6.1
connecting to: test
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
http://docs.mongodb.org/
Questions? Try the support group
http://groups.google.com/group/mongodb-user
Server has startup warnings:
2014-06-02T22:29:43.933+0800 [initandlisten]
2014-06-02T22:29:43.933+0800 [initandlisten] ** NOTE: This is a 32 bit MongoDB binary.
2014-06-02T22:29:43.933+0800 [initandlisten] ** 32 bit builds are limited to less than 2GB of data (or less with --journal).
2014-06-02T22:29:43.933+0800 [initandlisten] ** Note that journaling defaults to off for 32 bit and is currently off.
2014-06-02T22:29:43.933+0800 [initandlisten] ** See http://dochub.mongodb.org/core/32bit
2014-06-02T22:29:43.934+0800 [initandlisten]
>

As you can see, I'm running on a 32-bit CPU, but everything should work fine on a 64-bit CPU for the rest of this article. Everything has been smooth sailing so far, so we will move on to the create, read, update and delete operations (a Java-driver sketch of the same operations follows the shell examples below).

  • create

To create or insert a document, it is as easy as
db.inventory.insert( { _id: 10, type: "misc", item: "card", qty: 15 } )

Another insert example, using update with the upsert option (it inserts the document when no match is found):
db.inventory.update(
{ type: "book", item : "journal" },
{ $set : { qty: 10 } },
{ upsert : true }
)

Interesting insert using save
db.inventory.save( { type: "book", item: "notebook", qty: 40 } )

  • read

To read or query documents, it is as easy as
db.inventory.find( { type: "book", item: "journal" } )

Read more query examples here.

  • update

See the upsert example under create above.

  • delete

To remove all documents:
db.inventory.remove({})
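For completeness, the same four operations can also be done from the MongoDB Java driver used elsewhere on this blog. The class below is only a rough sketch, assuming the 2.x driver jar is on the classpath and a mongod listening on localhost; the database, collection and field values mirror the shell examples above.
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.Mongo;

public class LearnMongoCrud {

    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DB db = mongo.getDB("test");
        DBCollection inventory = db.getCollection("inventory");

        // create
        inventory.insert(new BasicDBObject("_id", 10).append("type", "misc")
                .append("item", "card").append("qty", 15));

        // read
        DBCursor cursor = inventory.find(new BasicDBObject("type", "misc"));
        while (cursor.hasNext()) {
            System.out.println(cursor.next());
        }

        // update
        inventory.update(new BasicDBObject("item", "card"),
                new BasicDBObject("$set", new BasicDBObject("qty", 10)));

        // delete
        inventory.remove(new BasicDBObject());

        mongo.close();
    }
}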


That's it for this lengthy introduction.

Saturday, June 21, 2014

measure java object size using jamm

Often when you develop a Java application, you want to measure how much heap an object occupies. There are many tools available, such as SizeOf.jar, but today we will take a look at jbellis/jamm. What is jamm? Jamm provides MemoryMeter, a Java agent to measure actual object memory use including JVM overhead.

It is very easy to use: get the source and build the jar. Below is the output of building the jar.
jason@localhost:~/codes/jamm$ ant jar
Buildfile: /home/jason/codes/jamm/build.xml

ivy-download:
[echo] Downloading Ivy...
[mkdir] Created dir: /home/jason/codes/jamm/target
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/jason/codes/jamm/target/ivy-2.1.0.jar

ivy-init:
[mkdir] Created dir: /home/jason/codes/jamm/target/lib

ivy-retrieve-build:
[ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
[ivy:retrieve] :: loading settings :: url = jar:file:/home/jason/codes/jamm/target/ivy-2.1.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
[ivy:retrieve] :: resolving dependencies :: jamm#jamm;working@debby.e2e.serveftp.net
[ivy:retrieve] confs: [default]
[ivy:retrieve] found junit#junit;4.11 in public
[ivy:retrieve] found org.hamcrest#hamcrest-core;1.3 in public
[ivy:retrieve] downloading http://repo1.maven.org/maven2/junit/junit/4.11/junit-4.11-javadoc.jar ...
[ivy:retrieve] .................................................................................................................................................................................................................. (370kB)
[ivy:retrieve] .. (0kB)
[ivy:retrieve] [SUCCESSFUL ] junit#junit;4.11!junit.jar(javadoc) (2772ms)
[ivy:retrieve] downloading http://repo1.maven.org/maven2/junit/junit/4.11/junit-4.11.jar ...
[ivy:retrieve] ........................................................................................................................................... (239kB)
[ivy:retrieve] .. (0kB)
[ivy:retrieve] [SUCCESSFUL ] junit#junit;4.11!junit.jar (1725ms)
[ivy:retrieve] downloading http://repo1.maven.org/maven2/junit/junit/4.11/junit-4.11-sources.jar ...
[ivy:retrieve] ................................................................................................. (147kB)
[ivy:retrieve] .. (0kB)
[ivy:retrieve] [SUCCESSFUL ] junit#junit;4.11!junit.jar(source) (1403ms)
[ivy:retrieve] downloading http://repo1.maven.org/maven2/org/hamcrest/hamcrest-core/1.3/hamcrest-core-1.3.jar ...
[ivy:retrieve] ........................... (43kB)
[ivy:retrieve] .. (0kB)
[ivy:retrieve] [SUCCESSFUL ] org.hamcrest#hamcrest-core;1.3!hamcrest-core.jar (1363ms)
[ivy:retrieve] :: resolution report :: resolve 9107ms :: artifacts dl 7338ms
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 2 | 2 | 2 | 0 || 4 | 4 |
---------------------------------------------------------------------
[ivy:retrieve] :: retrieving :: jamm#jamm [sync]
[ivy:retrieve] confs: [default]
[ivy:retrieve] 3 artifacts copied, 0 already retrieved (431kB/40ms)

init:
[mkdir] Created dir: /home/jason/codes/jamm/target/classes
[mkdir] Created dir: /home/jason/codes/jamm/target/test/classes

build:
[echo] jamm: /home/jason/codes/jamm/build.xml
[javac] Compiling 3 source files to /home/jason/codes/jamm/target/classes
[javac] Note: /home/jason/codes/jamm/src/org/github/jamm/AlwaysEmptySet.java uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

jar:
[jar] Building jar: /home/jason/codes/jamm/target/jamm-0.2.7-SNAPSHOT.jar

BUILD SUCCESSFUL
Total time: 26 seconds

The jar is found in target/jamm-0.2.7-SNAPSHOT.jar. You can test the built jar using ant test; below is the output.
jason@localhost:~/codes/jamm$ ant test
Buildfile: /home/jason/codes/jamm/build.xml

ivy-download:

ivy-init:

ivy-retrieve-build:
[ivy:retrieve] :: Ivy 2.1.0 - 20090925235825 :: http://ant.apache.org/ivy/ ::
[ivy:retrieve] :: loading settings :: url = jar:file:/home/jason/codes/jamm/target/ivy-2.1.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
[ivy:retrieve] :: resolving dependencies :: jamm#jamm;working@debby.e2e.serveftp.net
[ivy:retrieve] confs: [default]
[ivy:retrieve] found junit#junit;4.11 in public
[ivy:retrieve] found org.hamcrest#hamcrest-core;1.3 in public
[ivy:retrieve] :: resolution report :: resolve 266ms :: artifacts dl 23ms
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 2 | 0 | 0 | 0 || 4 | 0 |
---------------------------------------------------------------------
[ivy:retrieve] :: retrieving :: jamm#jamm [sync]
[ivy:retrieve] confs: [default]
[ivy:retrieve] 0 artifacts copied, 3 already retrieved (0kB/18ms)

init:

build:
[echo] jamm: /home/jason/codes/jamm/build.xml

jar:

build-test:
[javac] Compiling 2 source files to /home/jason/codes/jamm/target/test/classes
[javac] Note: /home/jason/codes/jamm/test/org/github/jamm/MemoryMeterTest.java uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: /home/jason/codes/jamm/test/org/github/jamm/MemoryMeterTest.java uses unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

checkos:

test-mac:

test:
[echo] running tests
[mkdir] Created dir: /home/jason/codes/jamm/target/test/output
[echo] Testing with default Java
[junit] Testsuite: org.github.jamm.GuessTest
[junit] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.869 sec
[junit]
[junit] Testsuite: org.github.jamm.MemoryMeterTest
[junit] Tests run: 13, Failures: 0, Errors: 0, Skipped: 3, Time elapsed: 0.813 sec
[junit]
[junit] Testcase: testMacOSX_i386(org.github.jamm.MemoryMeterTest):SKIPPED: got: "Linux", expected: is "Mac OS X"
[junit] Testcase: testMacOSX_x86_64(org.github.jamm.MemoryMeterTest):SKIPPED: got: "Linux", expected: is "Mac OS X"
[junit] Testcase: testCollections(org.github.jamm.MemoryMeterTest):SKIPPED: These vary quite radically depending on the JVM.

BUILD SUCCESSFUL
Total time: 14 seconds

Very easy, and it works out of the box. To start using it, just import the class and measure the object you are interested in,
MemoryMeter meter = new MemoryMeter();
meter.measure(object);

but remember to add "-javaagent:<path to>/jamm.jar" to your JVM arguments when you run your app.
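Putting it together, a minimal runnable sketch could look like the class below; the class name and the measured object are made up for illustration.
import org.github.jamm.MemoryMeter;

public class MeasureExample {

    public static void main(String[] args) {
        MemoryMeter meter = new MemoryMeter();

        String sample = "hello jamm";

        // shallow size of the String object itself
        System.out.println("measure: " + meter.measure(sample));

        // size including everything the object references (the char[] in this case)
        System.out.println("measureDeep: " + meter.measureDeep(sample));
    }
}

Compile against the jamm jar and run it with something like java -javaagent:target/jamm-0.2.7-SNAPSHOT.jar -cp .:target/jamm-0.2.7-SNAPSHOT.jar MeasureExample to see the shallow and deep sizes printed.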

Friday, June 20, 2014

Test subversion project using jenkins

In our last article, we learned the basics of Jenkins. If you do not know what Jenkins is or how to install it, please read that article before continuing with this one. Today, we will learn by configuring a project to be tested by Jenkins.

This article focuses on Subversion. If you store code in git, Jenkins supports it too, but you need to install the git plugin for Jenkins; the installation is only a few clicks.

To add the project to Jenkins, point your browser to the Jenkins server, then click on 'New Item'. This article continues with the first option, 'Build a free-style software project'. As for the Item name, you can basically name it after your project, but if you have a specific scope of tests, you can also name it as such. Then click next and the browser is redirected to another page similar to the one below.



I will explain using the screenshot above. For obvious reasons, I had to obfuscate certain parts of the image to protect the parties' interests, but you should get the idea easily. In the Description field, you can fill in additional information. You can play around with the first four options, but for this simple project I don't see the need for them. As for Advanced Project Options, you will most likely start to use it once you get a better understanding of Jenkins, so we leave those unticked as well.

Next is Source Code Management, where you need to select your code repository. As mentioned earlier, we select the Subversion radio button. The Repository URL field must be filled in so Jenkins knows where to get your code from. Most likely your code is access protected, so you should also provide credentials for Jenkins to retrieve the project codebase. For the Check-out Strategy field, you can choose the strategy you like; I just go for Use 'svn update' as much as possible, because there is no point checking out everything every time the project is built.

Next, specify how you want to trigger this project within Jenkins. You can tick whichever options should trigger the build process. I like to trigger it manually when I want to quickly test my project, and I have also set up a periodic build so that every Friday evening at 11pm the build is kicked off automatically.

For the build step, the target is normally test; this should be easy to understand if you have developed with ant before. This is the ant target Jenkins will execute, so open your project's ant build file and check the test target. I recommend clicking the Advanced... button to see additional configuration you might need to change. If your ant build file is in the same directory you configured in Repository URL just now, you will not need to modify anything. If you have special configuration that needs to be fed into the ant build file during the Jenkins build, specify it in properties.

The last step is Post-build Actions. Click the Add post-build action drop-down; you can add as many actions as you want, but as a starter a simple email notification will suffice. That's it, and remember to click the Save button to save all your configuration!

Go to the dashboard and you should now see your project configured. In the content page, click on your project's drop-down button and select Build Now; Jenkins will check out your project and execute the test target. If you click on the project, you should be able to see the build history in the left menu. This should get you started, and by now you should have a feel for where to go further: in the left menu, click on Configure, alter the advanced configuration and see how it goes!

That's it for this article, I hope you like it.

Sunday, June 8, 2014

Initial study into apache hadoop single node cluster

If you have read any big data articles, you will definitely have encountered the term hadoop. Today, we are going to learn about Apache Hadoop.

What is Apache Hadoop?

The Apache™ Hadoop® project develops open-source software for reliable, scalable, distributed computing.

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.

There are links here and here that further explain what hadoop is and its components.

I must admit that quickly setting up and running a single node cluster is difficult, mainly because this is my first time learning hadoop and the official documentation is not written for beginners. So I googled around and found a few helpful links. The following setup is mainly for a starter to get a feel for how it works, sort of like a hello world example for hadoop; as such, the goal is to keep it as simple as possible.

The setup is a single node cluster; it works within the current linux (debian) user environment, and we can easily remove the changes we've made after this tutorial. Note that the example below uses my own username (jason), and it should work with your own user ($HOME) environment too. User security is not a concern here, as the objective is to learn the basics of hadoop. A few pieces of system setup are needed, so we start by preparing the environment for hadoop.

Because hadoop is a java library, a JRE is required. This article assumes you have java installed and running; you can check as shown below. If you do not have java, google how to install a JRE.
jason@localhost:~$ java -version
java version "1.7.0_55"
Java(TM) SE Runtime Environment (build 1.7.0_55-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.55-b03, mixed mode)

An ssh daemon is required on your workstation. It is also recommended that openssh-client be installed, as we will generate a public and private key for automatic ssh login. Thus: apt-get install openssh-server openssh-client

Once both packages are installed, make sure the sshd daemon is running, then generate the public and private key.
ssh-keygen -t rsa -P '' -f id_rsa_hadoop

With the above command, we specify the key type rsa with an empty passphrase, so ssh will not prompt for a passphrase, and the key filename id_rsa_hadoop. It's okay if you do not specify the key filename, but because I have a few key files it is easier for me to identify and remove it later once this tutorial is done. The key should be available in your user's .ssh directory. To ensure ssh to localhost is automatic, append your public key to the authorized_keys file as a valid authorized key.
jason@localhost:~$ ls .ssh/
authorized_keys id_rsa id_rsa_hadoop id_rsa_hadoop.pub id_rsa.pub known_hosts

$ cat $HOME/.ssh/id_rsa_hadoop.pub >> $HOME/.ssh/authorized_keys

Right now, if you ssh to localhost, you should be logged in without ssh asking for a password. That's it for the localhost setup; we will move on to the hadoop configuration.

Download a copy of hadoop. For this example, we are using hadoop version 2.4.0; you can download it here. Then extract it into the Desktop directory.
jason@localhost:~/Desktop$ tar -zxf hadoop-2.4.0.tar.gz
jason@localhost:~/Desktop$ cd hadoop-2.4.0
jason@localhost:~/Desktop/hadoop-2.4.0$ ls
bin etc include lib libexec LICENSE.txt logs NOTICE.txt README.txt sbin share

Then we create the directories for the namenode and datanode.
jason@localhost:~/Desktop/hadoop-2.4.0$ pwd
/home/jason/Desktop/hadoop-2.4.0
jason@localhost:~/Desktop/hadoop-2.4.0$ mkdir -p hadoop_store/hdfs/namenode hadoop_store/hdfs/datanode

Then a few environment variables need to be set up. Assuming you are using bash, add the following to your .bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_55
export HADOOP_INSTALL=/home/jason/Desktop/hadoop-2.4.0
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END

The only variables you need to pay attention to are JAVA_HOME and HADOOP_INSTALL. Once this is done, source the file immediately in your terminal, as you will use these commands next.
jason@localhost:~/Desktop/hadoop-2.4.0$ source $HOME/.bashrc

We will now configure five hadoop configuration files, namely

  1. etc/hadoop/hadoop-env.sh

  2. etc/hadoop/core-site.xml

  3. etc/hadoop/hdfs-site.xml

  4. etc/hadoop/yarn-site.xml

  5. etc/hadoop/mapred-site.xml


It is assumed you are still in the working directory shown below, so you can easily edit the above files.
$ pwd
/home/jason/Desktop/hadoop-2.4.0

add the following content into etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_55

add the following contents into etc/hadoop/core-site.xml
<property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
</property>

add the following contents into etc/hadoop/hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/jason/Desktop/hadoop-2.4.0/hadoop_store/hdfs/namenode</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/jason/Desktop/hadoop-2.4.0/hadoop_store/hdfs/datanode</value>
</property>

add the following into etc/hadoop/yarn-site.xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

For etc/hadoop/mapred-site.xml, you can start by copying from etc/hadoop/mapred-site.xml.template
jason@localhost:~/Desktop/hadoop-2.4.0$ cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml

then add the following into the file  etc/hadoop/mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

Once that is done, that's it for the hadoop configuration; now run the command hdfs namenode -format. Below is the output in my terminal.
jason@localhost:~/Desktop/hadoop-2.4.0$ hdfs namenode -format
14/05/30 16:00:55 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.1.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.4.0
STARTUP_MSG: classpath = /home/jason/Desktop/hadoop-2.4.0/etc/hadoop:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/hadoop-annotations-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/asm-3.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/hadoop-auth-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-lang-2.6.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jettison-1.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/activation-1.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/xz-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib
/commons-httpclient-3.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-io-2.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0-tests.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/hadoop-nfs-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/common/hadoop-common-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/hdfs/hadoop-hdfs-2.4.0-tests.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-xc-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/jason/D
esktop/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-jaxrs-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/zookeeper-3.4.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jettison-1.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/activation-1.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-common-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-client-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-common-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-api-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/log4j-
1.2.17.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.4.0.jar:/home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.4.0.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG: build = http://svn.apache.org/repos/asf/hadoop/common -r 1583262; compiled by 'jenkins' on 2014-03-31T08:29Z
STARTUP_MSG: java = 1.7.0_55
************************************************************/
14/05/30 16:00:55 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
14/05/30 16:00:55 INFO namenode.NameNode: createNameNode [-format]
14/05/30 16:00:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-a15244a5-fea6-42ad-ab38-92b9730521f5
14/05/30 16:00:58 INFO namenode.FSNamesystem: fsLock is fair:true
14/05/30 16:00:58 INFO namenode.HostFileManager: read includes:
HostSet(
)
14/05/30 16:00:58 INFO namenode.HostFileManager: read excludes:
HostSet(
)
14/05/30 16:00:58 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
14/05/30 16:00:58 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
14/05/30 16:00:58 INFO util.GSet: Computing capacity for map BlocksMap
14/05/30 16:00:58 INFO util.GSet: VM type = 64-bit
14/05/30 16:00:58 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
14/05/30 16:00:58 INFO util.GSet: capacity = 2^21 = 2097152 entries
14/05/30 16:00:58 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
14/05/30 16:00:58 INFO blockmanagement.BlockManager: defaultReplication = 1
14/05/30 16:00:58 INFO blockmanagement.BlockManager: maxReplication = 512
14/05/30 16:00:58 INFO blockmanagement.BlockManager: minReplication = 1
14/05/30 16:00:58 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
14/05/30 16:00:58 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false
14/05/30 16:00:58 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
14/05/30 16:00:58 INFO blockmanagement.BlockManager: encryptDataTransfer = false
14/05/30 16:00:58 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
14/05/30 16:00:58 INFO namenode.FSNamesystem: fsOwner = jason (auth:SIMPLE)
14/05/30 16:00:58 INFO namenode.FSNamesystem: supergroup = supergroup
14/05/30 16:00:58 INFO namenode.FSNamesystem: isPermissionEnabled = true
14/05/30 16:00:58 INFO namenode.FSNamesystem: HA Enabled: false
14/05/30 16:00:58 INFO namenode.FSNamesystem: Append Enabled: true
14/05/30 16:00:59 INFO util.GSet: Computing capacity for map INodeMap
14/05/30 16:00:59 INFO util.GSet: VM type = 64-bit
14/05/30 16:00:59 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
14/05/30 16:00:59 INFO util.GSet: capacity = 2^20 = 1048576 entries
14/05/30 16:00:59 INFO namenode.NameNode: Caching file names occuring more than 10 times
14/05/30 16:00:59 INFO util.GSet: Computing capacity for map cachedBlocks
14/05/30 16:00:59 INFO util.GSet: VM type = 64-bit
14/05/30 16:00:59 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
14/05/30 16:00:59 INFO util.GSet: capacity = 2^18 = 262144 entries
14/05/30 16:00:59 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
14/05/30 16:00:59 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
14/05/30 16:00:59 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
14/05/30 16:00:59 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
14/05/30 16:00:59 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
14/05/30 16:00:59 INFO util.GSet: Computing capacity for map NameNodeRetryCache
14/05/30 16:00:59 INFO util.GSet: VM type = 64-bit
14/05/30 16:00:59 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
14/05/30 16:00:59 INFO util.GSet: capacity = 2^15 = 32768 entries
14/05/30 16:00:59 INFO namenode.AclConfigFlag: ACLs enabled? false
14/05/30 16:01:00 INFO namenode.FSImage: Allocated new BlockPoolId: BP-908722954-127.0.1.1-1401436859922
14/05/30 16:01:00 INFO common.Storage: Storage directory /home/jason/Desktop/hadoop-2.4.0/hadoop_store/hdfs/namenode has been successfully formatted.
14/05/30 16:01:01 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
14/05/30 16:01:01 INFO util.ExitUtil: Exiting with status 0
14/05/30 16:01:01 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.1.1
************************************************************/

With this output, you should not see any errors. Okay, all good; now start the engine!
jason@localhost:~/Desktop/hadoop-2.4.0$ start-dfs.sh && start-yarn.sh
14/05/30 16:04:37 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /home/jason/Desktop/hadoop-2.4.0/logs/hadoop-jason-namenode-localhost.out
localhost: starting datanode, logging to /home/jason/Desktop/hadoop-2.4.0/logs/hadoop-jason-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/jason/Desktop/hadoop-2.4.0/logs/hadoop-jason-secondarynamenode-localhost.out
14/05/30 16:05:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/jason/Desktop/hadoop-2.4.0/logs/yarn-jason-resourcemanager-localhost.out
localhost: starting nodemanager, logging to /home/jason/Desktop/hadoop-2.4.0/logs/yarn-jason-nodemanager-localhost.out
jason@localhost:~/Desktop/hadoop-2.4.0$

You can check with jps whether hadoop is running. The expected hadoop processes are ResourceManager, SecondaryNameNode, NameNode, NodeManager and DataNode.
jason@localhost:~$ jps
22701 ResourceManager
22512 SecondaryNameNode
22210 NameNode
22800 NodeManager
6728 org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
22840 Jps
22326 DataNode

You can access apache hadoop via the web interfaces:

Cluster status: http://localhost:8088
HDFS status: http://localhost:50070
Secondary NameNode status: http://localhost:50090
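Optionally, you can also poke the cluster from Java. The class below is only a hedged sketch, assuming the hadoop client jars from the 2.4.0 distribution are on the classpath and the file path is made up; it writes and reads back a small file over the hdfs://localhost:9000 address configured earlier.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSmokeTest {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // newer spelling of the fs.default.name property set in core-site.xml above
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        FileSystem fs = FileSystem.get(conf);

        Path path = new Path("/tmp/hello.txt");
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.writeUTF("hello hdfs");
        }

        try (FSDataInputStream in = fs.open(path)) {
            System.out.println(in.readUTF());
        }

        fs.close();
    }
}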

So that looks good; everything is configured and it is now running fine. We will continue by running a few examples.
jason@localhost:~/Desktop/hadoop-2.4.0$ hadoop jar /home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar TestDFSIO -write -nrFiles 20 -fileSize 10
14/05/30 16:10:54 INFO fs.TestDFSIO: TestDFSIO.1.7
14/05/30 16:10:54 INFO fs.TestDFSIO: nrFiles = 20
14/05/30 16:10:54 INFO fs.TestDFSIO: nrBytes (MB) = 10.0
14/05/30 16:10:54 INFO fs.TestDFSIO: bufferSize = 1000000
14/05/30 16:10:54 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
14/05/30 16:10:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/30 16:10:57 INFO fs.TestDFSIO: creating control file: 10485760 bytes, 20 files
14/05/30 16:11:01 INFO fs.TestDFSIO: created control files for: 20 files
14/05/30 16:11:01 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/30 16:11:01 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/30 16:11:04 INFO mapred.FileInputFormat: Total input paths to process : 20
14/05/30 16:11:04 INFO mapreduce.JobSubmitter: number of splits:20
14/05/30 16:11:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1401437120030_0001
14/05/30 16:11:06 INFO impl.YarnClientImpl: Submitted application application_1401437120030_0001
14/05/30 16:11:06 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1401437120030_0001/
14/05/30 16:11:06 INFO mapreduce.Job: Running job: job_1401437120030_0001
14/05/30 16:11:28 INFO mapreduce.Job: Job job_1401437120030_0001 running in uber mode : false
14/05/30 16:11:28 INFO mapreduce.Job: map 0% reduce 0%
14/05/30 16:12:30 INFO mapreduce.Job: map 7% reduce 0%
14/05/30 16:12:31 INFO mapreduce.Job: map 17% reduce 0%
14/05/30 16:12:34 INFO mapreduce.Job: map 23% reduce 0%
14/05/30 16:12:36 INFO mapreduce.Job: map 28% reduce 0%
14/05/30 16:12:37 INFO mapreduce.Job: map 30% reduce 0%
14/05/30 16:13:36 INFO mapreduce.Job: map 33% reduce 0%
14/05/30 16:13:39 INFO mapreduce.Job: map 40% reduce 0%
14/05/30 16:13:40 INFO mapreduce.Job: map 42% reduce 0%
14/05/30 16:13:42 INFO mapreduce.Job: map 52% reduce 0%
14/05/30 16:13:43 INFO mapreduce.Job: map 55% reduce 0%
14/05/30 16:13:44 INFO mapreduce.Job: map 58% reduce 0%
14/05/30 16:13:45 INFO mapreduce.Job: map 60% reduce 0%
14/05/30 16:14:47 INFO mapreduce.Job: map 67% reduce 2%
14/05/30 16:14:50 INFO mapreduce.Job: map 75% reduce 2%
14/05/30 16:14:51 INFO mapreduce.Job: map 78% reduce 22%
14/05/30 16:14:53 INFO mapreduce.Job: map 82% reduce 22%
14/05/30 16:14:54 INFO mapreduce.Job: map 85% reduce 22%
14/05/30 16:14:55 INFO mapreduce.Job: map 85% reduce 28%
14/05/30 16:15:37 INFO mapreduce.Job: map 88% reduce 28%
14/05/30 16:15:40 INFO mapreduce.Job: map 93% reduce 28%
14/05/30 16:15:42 INFO mapreduce.Job: map 95% reduce 32%
14/05/30 16:15:44 INFO mapreduce.Job: map 100% reduce 32%
14/05/30 16:15:45 INFO mapreduce.Job: map 100% reduce 67%
14/05/30 16:15:47 INFO mapreduce.Job: map 100% reduce 100%
14/05/30 16:15:49 INFO mapreduce.Job: Job job_1401437120030_0001 completed successfully
14/05/30 16:15:50 INFO mapreduce.Job: Counters: 50
File System Counters
FILE: Number of bytes read=1673
FILE: Number of bytes written=1965945
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=4720
HDFS: Number of bytes written=209715278
HDFS: Number of read operations=83
HDFS: Number of large read operations=0
HDFS: Number of write operations=22
Job Counters
Killed map tasks=3
Launched map tasks=23
Launched reduce tasks=1
Data-local map tasks=23
Total time spent by all maps in occupied slots (ms)=1319128
Total time spent by all reduces in occupied slots (ms)=124593
Total time spent by all map tasks (ms)=1319128
Total time spent by all reduce tasks (ms)=124593
Total vcore-seconds taken by all map tasks=1319128
Total vcore-seconds taken by all reduce tasks=124593
Total megabyte-seconds taken by all map tasks=1350787072
Total megabyte-seconds taken by all reduce tasks=127583232
Map-Reduce Framework
Map input records=20
Map output records=100
Map output bytes=1467
Map output materialized bytes=1787
Input split bytes=2470
Combine input records=0
Combine output records=0
Reduce input groups=5
Reduce shuffle bytes=1787
Reduce input records=100
Reduce output records=5
Spilled Records=200
Shuffled Maps =20
Failed Shuffles=0
Merged Map outputs=20
GC time elapsed (ms)=14063
CPU time spent (ms)=127640
Physical memory (bytes) snapshot=5418561536
Virtual memory (bytes) snapshot=14516457472
Total committed heap usage (bytes)=4196401152
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=2250
File Output Format Counters
Bytes Written=78
14/05/30 16:15:50 INFO fs.TestDFSIO: ----- TestDFSIO ----- : write
14/05/30 16:15:50 INFO fs.TestDFSIO: Date & time: Fri May 30 16:15:50 MYT 2014
14/05/30 16:15:50 INFO fs.TestDFSIO: Number of files: 20
14/05/30 16:15:50 INFO fs.TestDFSIO: Total MBytes processed: 200.0
14/05/30 16:15:50 INFO fs.TestDFSIO: Throughput mb/sec: 1.6888468553671554
14/05/30 16:15:50 INFO fs.TestDFSIO: Average IO rate mb/sec: 1.840719223022461
14/05/30 16:15:50 INFO fs.TestDFSIO: IO rate std deviation: 0.7043729046488437
14/05/30 16:15:50 INFO fs.TestDFSIO: Test exec time sec: 289.58
14/05/30 16:15:50 INFO fs.TestDFSIO:
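
TestDFSIO can also benchmark reads over the files that the write run just produced. As a sketch (not run here), you could follow up with a read test of the same 20 files of 10 MB before cleaning up:

hadoop jar /home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar TestDFSIO -read -nrFiles 20 -fileSize 10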

Once the benchmarking is done, clean up the generated test files under /benchmarks/TestDFSIO.
jason@localhost:~/Desktop/hadoop-2.4.0$ hadoop jar /home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.4.0-tests.jar TestDFSIO -clean
14/05/30 16:20:03 INFO fs.TestDFSIO: TestDFSIO.1.7
14/05/30 16:20:03 INFO fs.TestDFSIO: nrFiles = 1
14/05/30 16:20:03 INFO fs.TestDFSIO: nrBytes (MB) = 1.0
14/05/30 16:20:03 INFO fs.TestDFSIO: bufferSize = 1000000
14/05/30 16:20:03 INFO fs.TestDFSIO: baseDir = /benchmarks/TestDFSIO
14/05/30 16:20:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/30 16:20:06 INFO fs.TestDFSIO: Cleaning up test files

Another example job estimates the value of pi, here with 2 maps and 5 samples per map.
jason@localhost:~/Desktop/hadoop-2.4.0$ hadoop jar /home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar pi 2 5
Number of Maps = 2
Samples per Map = 5
14/05/30 16:21:18 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
14/05/30 16:21:23 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/05/30 16:21:25 INFO input.FileInputFormat: Total input paths to process : 2
14/05/30 16:21:26 INFO mapreduce.JobSubmitter: number of splits:2
14/05/30 16:21:27 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1401437120030_0002
14/05/30 16:21:28 INFO impl.YarnClientImpl: Submitted application application_1401437120030_0002
14/05/30 16:21:28 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1401437120030_0002/
14/05/30 16:21:28 INFO mapreduce.Job: Running job: job_1401437120030_0002
14/05/30 16:21:53 INFO mapreduce.Job: Job job_1401437120030_0002 running in uber mode : false
14/05/30 16:21:53 INFO mapreduce.Job: map 0% reduce 0%
14/05/30 16:22:18 INFO mapreduce.Job: map 100% reduce 0%
14/05/30 16:22:34 INFO mapreduce.Job: map 100% reduce 100%
14/05/30 16:22:35 INFO mapreduce.Job: Job job_1401437120030_0002 completed successfully
14/05/30 16:22:36 INFO mapreduce.Job: Counters: 49
File System Counters
FILE: Number of bytes read=50
FILE: Number of bytes written=280470
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=530
HDFS: Number of bytes written=215
HDFS: Number of read operations=11
HDFS: Number of large read operations=0
HDFS: Number of write operations=3
Job Counters
Launched map tasks=2
Launched reduce tasks=1
Data-local map tasks=2
Total time spent by all maps in occupied slots (ms)=46538
Total time spent by all reduces in occupied slots (ms)=13821
Total time spent by all map tasks (ms)=46538
Total time spent by all reduce tasks (ms)=13821
Total vcore-seconds taken by all map tasks=46538
Total vcore-seconds taken by all reduce tasks=13821
Total megabyte-seconds taken by all map tasks=47654912
Total megabyte-seconds taken by all reduce tasks=14152704
Map-Reduce Framework
Map input records=2
Map output records=4
Map output bytes=36
Map output materialized bytes=56
Input split bytes=294
Combine input records=0
Combine output records=0
Reduce input groups=2
Reduce shuffle bytes=56
Reduce input records=4
Reduce output records=0
Spilled Records=8
Shuffled Maps =2
Failed Shuffles=0
Merged Map outputs=2
GC time elapsed (ms)=631
CPU time spent (ms)=7890
Physical memory (bytes) snapshot=623665152
Virtual memory (bytes) snapshot=2097958912
Total committed heap usage (bytes)=559939584
Shuffle Errors
BAD_ID=0
CONNECTION=0
IO_ERROR=0
WRONG_LENGTH=0
WRONG_MAP=0
WRONG_REDUCE=0
File Input Format Counters
Bytes Read=236
File Output Format Counters
Bytes Written=97
Job Finished in 73.196 seconds
Estimated value of Pi is 3.60000000000000000000
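
The estimate of 3.6 is rough because only 2 maps with 5 samples each were used; the pi example is a Monte Carlo style estimate, so accuracy improves with more maps and more samples per map. As a sketch (not run here), you could try something like:

hadoop jar /home/jason/Desktop/hadoop-2.4.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.4.0.jar pi 16 1000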

You can also create files and store them on HDFS using the file system shell. Here, dummy.txt is a small local text file containing "hello world.". You can read more about the shell commands at http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/FileSystemShell.html
jason@localhost:~$ hadoop fs -mkdir -p /user/hduser
14/05/30 16:27:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
jason@localhost:~$ hadoop fs -copyFromLocal dummy.txt dummy.txt
14/05/30 16:27:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
jason@localhost:~$ hadoop fs -ls
14/05/30 16:28:10 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 1 jason supergroup 13 2014-05-30 16:27 dummy.txt
jason@localhost:~$ hadoop fs -cat /user/hduser/dummy.txt
14/05/30 16:29:00 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
cat: `/user/hduser/dummy.txt': No such file or directory
jason@localhost:~$ hadoop fs -cat /user/jason/dummy.txt
14/05/30 16:29:11 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hello world.
jason@localhost:~$ hadoop fs -ls /
14/05/30 16:29:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x - jason supergroup 0 2014-05-30 16:20 /benchmarks
drwx------ - jason supergroup 0 2014-05-30 16:11 /tmp
drwxr-xr-x - jason supergroup 0 2014-05-30 16:27 /user
jason@localhost:~$ hadoop fs -rm dummy.txt
14/05/30 16:29:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/05/30 16:29:54 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted dummy.txt
jason@localhost:~$ hadoop fs -ls
14/05/30 16:30:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
jason@localhost:~$
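
Note that relative paths in hadoop fs commands resolve against the current user's HDFS home directory (/user/jason here), which is why /user/hduser/dummy.txt was not found while /user/jason/dummy.txt was. To copy a file from HDFS back to the local file system before deleting it, a sketch (the local target path is just an example):

hadoop fs -copyToLocal /user/jason/dummy.txt /tmp/dummy-copy.txt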

Once you are done with the Hadoop cluster, you can shut it down using stop-dfs.sh && stop-yarn.sh.
jason@localhost:~/Desktop/hadoop-2.4.0$ stop-dfs.sh && stop-yarn.sh
14/05/30 17:51:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
14/05/30 17:51:25 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
stopping yarn daemons
stopping resourcemanager
localhost: stopping nodemanager
no proxyserver to stop

Finally, you can remove or revert the changes made for this tutorial. The files and directories involved are listed below, with a cleanup sketch after the list.

/home/jason/Desktop/hadoop-2.4.0
/home/jason/.ssh/id_rsa_hadoop.pub
/home/jason/.ssh/id_rsa_hadoop
/home/jason/.ssh/authorized_keys
/home/jason/.bashrc
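
A cleanup sketch along those lines, assuming the SSH key pair and the .bashrc entries were added only for this tutorial (adjust if you use them elsewhere):

rm -rf /home/jason/Desktop/hadoop-2.4.0
rm /home/jason/.ssh/id_rsa_hadoop /home/jason/.ssh/id_rsa_hadoop.pub
# remove the id_rsa_hadoop.pub entry from /home/jason/.ssh/authorized_keys
# remove the Hadoop-related exports added to /home/jason/.bashrc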

That's it for this lengthy article. I hope you found it useful, and if you learned something, remember to donate to us too!