A Head Start with Hadoop

So the very first question that comes to mind is: what exactly is this thing? Are we talking about some new technology, or have I landed in the wrong place? No worries, you are in the right place. This post explains various things about Hadoop: what Hadoop exactly is, how it originated, how it is helping today's major internet giants accomplish their computing needs, how it works, and what its enormous capabilities are.

Where did Hadoop come from?

The underlying technology of Hadoop was originally invented by Google for its own bulk data processing needs, such as crawling and indexing the web. These underlying technologies were MapReduce and GFS (the Google File System). There was nothing on the market that would let Google do that at scale, so it built its own platform. Google's ideas were later incorporated into Nutch, an open source web-search project, and Hadoop was spun off from Nutch as a separate project. Hadoop was started by Doug Cutting, who was an engineer at Yahoo at the time, and Yahoo has played a key role in developing Hadoop for enterprise applications.

What problems can Hadoop solve?

The Hadoop platform was designed to solve problems where you have a lot of data — perhaps a mixture of complex and structured data — and it doesn’t fit nicely into tables. It’s for situations where you want to run analytics that are deep and computationally extensive, like clustering and targeting. That’s exactly what Google was doing when it was indexing the web and examining user behavior to improve performance algorithms.
Hadoop applies to a bunch of markets. In finance, if you want to do accurate portfolio evaluation and risk analysis, you can build sophisticated models that are hard to jam into a database engine, but Hadoop can handle it. In online retail, if you want to deliver better search answers to your customers so they're more likely to buy the thing you show them, that sort of problem is well addressed by the platform Google built. Those are just a few examples.

How is Hadoop architected?

Hadoop is designed to run on a large number of machines that don't share any memory or disks. That means you can buy a whole bunch of commodity servers, slot them into a rack, and run the Hadoop software on each one. When you want to load all of your organization's data into Hadoop, the software busts that data into pieces that it then spreads across your different servers. There's no one place where you go to talk to all of your data; Hadoop keeps track of where the data resides. And because multiple copies of each piece are stored, data on a server that goes offline or dies can be automatically replicated from a known good copy.
In a centralized database system, you've got one big disk connected to four or eight or 16 big processors, and that is as much horsepower as you can bring to bear. In a Hadoop cluster, every one of those servers has two or four or eight CPUs. You can run your indexing job by sending your code to each of the dozens of servers in your cluster, and each server operates on its own little piece of the data. The results are then delivered back to you as a unified whole. That's MapReduce: you map the operation out to all of those servers and then you reduce the results back into a single result set (a toy illustration follows below). I will put up a further article on how Map and Reduce actually work.
Architecturally, the reason you’re able to deal with lots of data is because Hadoop spreads it out. And the reason you’re able to ask complicated computational questions is because you’ve got all of these processors, working in parallel, harnessed together, to complete your BIG job and process that large amount of data.
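To make the map-then-reduce idea concrete before that follow-up article, here is a minimal sketch in plain Java (not Hadoop code; all names are made up for illustration). Each "server" is simulated by a worker that counts words in its own slice of the data (the map step), and the per-slice counts are then merged into a single result (the reduce step):

    import java.util.*;
    import java.util.concurrent.*;

    public class MapReduceIdea {
        public static void main(String[] args) throws Exception {
            // Pretend each string below is the slice of data living on one server.
            List<String> slices = Arrays.asList(
                    "big data big cluster",
                    "hadoop cluster big",
                    "data data hadoop");

            ExecutorService pool = Executors.newFixedThreadPool(slices.size());

            // "Map" phase: every slice is processed in parallel by its own worker.
            List<Future<Map<String, Integer>>> partials = new ArrayList<>();
            for (String slice : slices) {
                partials.add(pool.submit(() -> {
                    Map<String, Integer> counts = new HashMap<>();
                    for (String word : slice.split("\\s+")) {
                        counts.merge(word, 1, Integer::sum);
                    }
                    return counts;
                }));
            }

            // "Reduce" phase: merge all the partial results into one final result set.
            Map<String, Integer> total = new HashMap<>();
            for (Future<Map<String, Integer>> partial : partials) {
                partial.get().forEach((word, count) -> total.merge(word, count, Integer::sum));
            }
            pool.shutdown();

            // Prints big=3, data=3, cluster=2, hadoop=2 (order may vary).
            System.out.println(total);
        }
    }

Real Hadoop does essentially the same thing, except that the "slices" are HDFS blocks spread over many machines and the framework handles scheduling, shuffling, and failures for you.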
Yes, that's only the beginning. To work with a Hadoop system, you need to be aware of the following terms and concepts.
Basic working of Hadoop:
In a traditional non-distributed architecture, data is stored in one server and any client program accesses this central data server to retrieve it. The non-distributed model has a few fundamental issues. In this model, you mostly scale vertically, by adding more CPU, more storage, etc. The architecture is also not very reliable: if the main server fails, you have to go back to a backup to restore the data. From a performance point of view, it will not return results quickly when you run a query against a huge data set.
In the Hadoop distributed architecture, both data and processing are distributed across multiple servers. The following are some of the key points to remember about Hadoop:
  • Each server offers local computation and storage, i.e., when you run a query against a large data set, every server in this distributed architecture executes the query against its own local data set. Finally, the result sets from all these local servers are consolidated.
  • In simple terms, instead of running a query on a single server, the query is split across multiple servers and the results are consolidated. This means that the results of a query on a large dataset are returned faster.
  • You don't need one powerful server; instead, you use several less expensive commodity servers as individual Hadoop nodes.
  • High fault tolerance. If any node fails in the Hadoop environment, the cluster can still return the dataset properly, because Hadoop takes care of replicating and distributing the data across multiple nodes (see the toy sketch after this list).
  • A simple Hadoop installation can use just two servers, but you can scale out to several thousand servers without much additional effort.
  • Hadoop is written in Java, so it can run on any platform with a JVM.
  • Please keep in mind that Hadoop is not a replacement for your RDBMS. You'll typically use Hadoop for unstructured data.
  • Originally Google started using the distributed computing model based on GFS (Google File System) and MapReduce. Later, Nutch (open source web search software) was rewritten using MapReduce. Hadoop was branched out of Nutch as a separate project, and it is now a top-level Apache project that has gained tremendous momentum and popularity in recent years.
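As a toy sketch of the fault-tolerance point above (plain Java again, with made-up node names, not Hadoop internals): a block is replicated on three of five nodes, one of those nodes dies, and a read simply falls back to a surviving replica.

    import java.util.*;

    public class ReplicationIdea {
        public static void main(String[] args) {
            // Which nodes are currently alive.
            Map<String, Boolean> nodeAlive = new HashMap<>();
            for (String node : Arrays.asList("node1", "node2", "node3", "node4", "node5")) {
                nodeAlive.put(node, true);
            }

            // The same data block is replicated on three of the five nodes.
            List<String> blockReplicas = Arrays.asList("node1", "node3", "node5");

            // One of the nodes holding a replica goes down.
            nodeAlive.put("node3", false);

            // A read just falls back to any replica that lives on a healthy node.
            String source = blockReplicas.stream()
                    .filter(nodeAlive::get)
                    .findFirst()
                    .orElseThrow(() -> new IllegalStateException("all replicas lost"));

            System.out.println("Reading block from " + source); // prints node1
        }
    }

HDFS additionally re-replicates the lost block onto another healthy node in the background, so the replication factor is restored over time.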


HDFS:
HDFS stands for Hadoop Distributed File System, and it is the storage system used by Hadoop. The following points give a high-level picture of how HDFS works:
  • In an HDFS cluster there is one NameNode and multiple DataNodes (servers); the data itself is broken into blocks (b1, b2, ...).
  • When you dump a file (or data) into HDFS, it is stored as blocks on the various nodes of the Hadoop cluster. HDFS creates several replicas of each data block and distributes them across the cluster in a way that is reliable and allows fast retrieval. A typical HDFS block size is 128 MB, and each data block is replicated to multiple nodes across the cluster.
  • Hadoop internally makes sure that the failure of a single node does not result in data loss.
  • There is one NameNode that manages the file system metadata.
  • There are multiple DataNodes (these are the cheap commodity servers) that store the actual data blocks.
  • When you execute a query (or read a file) from a client, the client first reaches out to the NameNode to get the file metadata, and then reaches out to the DataNodes to get the actual data blocks (a small Java sketch of this appears after this list).
  • Hadoop provides a command-line interface for administrators and users to work with HDFS.
  • The NameNode comes with a built-in web server from which you can browse the HDFS filesystem and view some basic cluster statistics.
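To make the NameNode/DataNode interaction a bit more concrete, here is a small sketch using Hadoop's Java FileSystem API. Treat it as a hedged example rather than a recipe: the NameNode address hdfs://namenode:9000 and the paths are placeholders, and in a real cluster the address would normally come from core-site.xml. The client code only deals with paths; the library talks to the NameNode for metadata and to the DataNodes for the actual blocks.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder NameNode address; normally picked up from core-site.xml.
            conf.set("fs.defaultFS", "hdfs://namenode:9000");

            FileSystem fs = FileSystem.get(conf);

            // Write a small file; HDFS splits it into blocks and replicates them for us.
            Path file = new Path("/tmp/hello.txt");
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello hadoop\n".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back: metadata comes from the NameNode, the bytes from the DataNodes.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }

            // List a directory, much like browsing HDFS from the command line or the web UI.
            for (FileStatus status : fs.listStatus(new Path("/tmp"))) {
                System.out.println(status.getPath() + "  " + status.getLen() + " bytes");
            }

            fs.close();
        }
    }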


MapReduce:
The following are a few of the key points to remember about MapReduce:
  • MapReduce is a parallel programming model that is used to process and retrieve data from the Hadoop cluster.
  • In this model, the framework handles a lot of messy details that programmers don't need to worry about. For example, it takes care of parallelization, fault tolerance, data distribution, load balancing, etc.
  • It splits the work into tasks and executes them on the various nodes in parallel, thus speeding up the computation and retrieving the required data from a huge dataset quickly.
  • It provides a clean abstraction for programmers: they just have to implement (or use) two functions, map and reduce (see the word-count sketch after this list).
  • The data is fed into the map function as key/value pairs, and the map function produces intermediate key/value pairs.
  • Once the mapping is done, all the intermediate results from the various nodes are reduced to create the final output.
  • The JobTracker keeps track of all the MapReduce jobs that are running on the various nodes. It schedules the jobs and keeps track of all the map and reduce tasks running across the nodes; if any of those tasks fails, it reallocates the task to another node. In simple terms, the JobTracker is responsible for making sure that a query on a huge dataset runs successfully and that the data is returned to the client in a reliable manner.
  • The TaskTracker performs the map and reduce tasks assigned by the JobTracker. It also constantly sends a heartbeat message to the JobTracker, which helps the JobTracker decide whether or not to delegate a new task to that particular node.
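To show what those two functions look like in practice, below is the classic word-count program written against Hadoop's Java MapReduce API (essentially the standard tutorial example; the job name and input/output paths are placeholders). The map function emits a (word, 1) pair for every word it sees, and the reduce function sums those counts per word.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

        // Map: for every input line, emit (word, 1) for each word in the line.
        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        // Reduce: sum up all the counts emitted for the same word.
        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable val : values) {
                    sum += val.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }

        // Driver: configure the job and submit it to the cluster.
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            Job job = Job.getInstance(conf, "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

You would typically package this into a jar and submit it with something like hadoop jar wordcount.jar WordCount <input dir> <output dir>; the JobTracker (or YARN in newer versions) then schedules the map and reduce tasks across the cluster as described above.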
This was only a first look into Hadoop. Many more terms are yet to come... stay tuned for further Hadoop insights.
