MapReduce Unwinding … Reduce

MapReduce Unwinding … Sort & Shuffle

MapReduce Unwinding … Map

MapReduce Unwinding … Algorithm

Having discussed in my last blog how Hadoop manages fault tolerance within its cluster while processing data, it is now time to look at the algorithm MapReduce uses to process that data. It is the NameNode (NN) to which a user submits a request to process data, along with the data files themselves. As soon as the NN receives the data…
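As a rough sketch of that first step, the snippet below shows how a client might place an input file into HDFS before submitting a job, using Hadoop's Java FileSystem API; the file and directory names are hypothetical, not from the post.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadInput {
    public static void main(String[] args) throws Exception {
        // Configuration picks up core-site.xml / hdfs-site.xml from the classpath
        Configuration conf = new Configuration();
        // FileSystem.get() returns a client handle; metadata calls go to the NameNode
        FileSystem fs = FileSystem.get(conf);
        // Copy a local data file into HDFS so a MapReduce job can later process it
        fs.copyFromLocalFile(new Path("/tmp/sales.csv"),              // hypothetical local file
                             new Path("/user/demo/input/sales.csv")); // hypothetical HDFS path
        fs.close();
    }
}
```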

MapReduce Unwinding … Fault Tolerance

Before we look at the intermediate data produced by the mapper, it is worth examining the fault-tolerance aspects of Hadoop with respect to MapReduce processing. Once the NameNode (NN) receives the data files to be processed, it splits them and assigns the splits to DataNodes (DN). This assignment would be…
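To see that split-and-assign step from a client's point of view, here is a small, hypothetical sketch using Hadoop's Java API: it asks the NameNode which DataNodes hold each block of a file (the path is a placeholder). Because each block is replicated across several DataNodes, losing one node does not lose the data.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ShowBlockLocations {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/user/demo/input/sales.csv");  // hypothetical HDFS file
        FileStatus status = fs.getFileStatus(file);
        // Ask the NameNode for the block layout: offset, length, and the DataNodes holding replicas
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset=" + block.getOffset()
                    + " length=" + block.getLength()
                    + " hosts=" + String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```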

MapReduce Unwinding … Philosophy

The philosophy behind how MapReduce works is straightforward and can be summarized in six steps. Whatever data we provide as input to Hadoop is first split into a number of smaller pieces. Typically, each split is limited to 64 MB. If a 1 TB file arrives to be processed on a data node…
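To make the split arithmetic concrete (a sketch, assuming the 64 MB split size mentioned above): a 1 TB file works out to 1,048,576 MB, which yields 16,384 splits, each of which can be handed to its own map task.

```java
public class SplitMath {
    public static void main(String[] args) {
        long fileSizeMb = 1024L * 1024L;  // 1 TB expressed in MB (1,048,576 MB)
        long splitSizeMb = 64L;           // split size cited in the post
        long splits = (fileSizeMb + splitSizeMb - 1) / splitSizeMb;  // round up
        System.out.println(splits + " map splits");  // prints "16384 map splits"
    }
}
```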

MapReduce : Internals

MapReduce is a programming paradigm that provides an interface for developers to map end-user requirements (any type of analysis on data) to code. This framework is one of the core components of Hadoop. The way it provides fault tolerance and massive scalability across hundreds or thousands of servers in a cluster for the processing of…
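As an illustration of mapping an end-user requirement to code, here is a minimal word-count sketch against Hadoop's Java MapReduce API; the class names and tokenization are my own assumptions, not code from the post.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Map phase: emit (word, 1) for every token in an input line
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        for (String token : line.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Reduce phase: sum the counts collected for each word after sort & shuffle
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        context.write(word, new IntWritable(sum));
    }
}
```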

HDFS Architecture Explained

Inspired by the Google File System, which Google developed in C++ in 2003 to enhance its search engine, the Hadoop Distributed File System (HDFS), a Java-based file system, became a core component of Hadoop. With its fault-tolerant and self-healing features, HDFS enables Hadoop to harness the true capability of distributed processing techniques by turning…

Magic of Hadoop

Because of the limitations of currently available enterprise data warehousing tools, organizations were not able to consolidate their data in one place for faster processing. Traditional ETL tools may take hours, days, and sometimes even weeks. The performance of these tools is constrained by two hardware limitations. Vertical hardware scalability: hardware can be scaled…

Journey of Hadoop

At the outset of the twenty-first century, around 1999-2000, with the increasing popularity of XML and Java, the internet was evolving faster than ever. As the World Wide Web grew at a dizzying pace, the search engine technologies of the day were working fine, but a better open-source search engine was the need of the hour to cater to the future…

Big Data: An Introduction

Innovations in technology have made resources cheaper than before. This enables organizations to store more data at lower cost, and so the volume of data keeps growing. Gradually it has grown from megabytes (MB) to petabytes (1 PB = 10^9 MB). This huge increase in data requires a different kind of processing and ways of…