Hadoop, by definition, is an application written in Java that enables distributed computing on very large data sets across clusters of commodity hardware.
The father of Hadoop is Doug Cutting, who got the idea from the Google File System paper and started the project a few years back. Hadoop is now very popular and hot in the market: people are learning it, and companies are taking it up and using it in many areas, as we saw in the last blog.
The first Hadoop was Apache Hadoop, under the ASF (Apache Software Foundation); after Doug, the Apache community did most of the patching and matured it. Nowadays there are many vendors providing their own Hadoop distributions, but the base is always Apache Hadoop.
Different Hadoop vendors in the market nowadays:
- Apache Hadoop
- Cloudera Hadoop
- Hortonworks HDP
- Datameer
- Karmasphere
- MapR
- IBM BigInsights
- AWS
- IDH (Intel Distribution of Hadoop)
- EMC Greenplum
Let's talk about the Hadoop ecosystem. The Hadoop ecosystem consists of different components, all of them top-level projects in the ASF. I am listing them below with a brief intro about each.
HDFS :- The Hadoop Distributed File System: very reliable, fault tolerant, high performance, and scalable, it stores data spread across many commodity machines. It uses a much larger block size than normal filesystems and is written in Java.
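To make that concrete, here is a minimal sketch of writing and then reading a small file through the HDFS Java FileSystem API. The path /user/demo/hello.txt is just an example; the cluster configuration is assumed to be picked up from the core-site.xml on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsHello {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();        // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);            // handle to the cluster's default filesystem
        Path file = new Path("/user/demo/hello.txt");    // example path, adjust for your cluster

        try (FSDataOutputStream out = fs.create(file, true)) {  // create (or overwrite) the file
            out.writeUTF("hello hdfs");
        }
        try (FSDataInputStream in = fs.open(file)) {             // read it back
            System.out.println(in.readUTF());
        }
    }
}
```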
Hive :- It is basically an interface on top of Hadoop to work with data files in tabular form. Hive is a SQL-based DWH (data warehouse) system that facilitates data summarization, querying, and analysis of large datasets stored in HDFS.
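As one sketch of how this looks from code, a Java client can run HiveQL through the standard HiveServer2 JDBC driver. The connection URL, the empty credentials, and the web_logs table below are only assumptions for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");   // HiveServer2 JDBC driver
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:hive2://localhost:10000/default", "", "");
             Statement stmt = conn.createStatement();
             // hypothetical table: count hits per page from logs stored in HDFS
             ResultSet rs = stmt.executeQuery(
                     "SELECT page, COUNT(*) AS hits FROM web_logs GROUP BY page")) {
            while (rs.next()) {
                System.out.println(rs.getString("page") + " -> " + rs.getLong("hits"));
            }
        }
    }
}
```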
Pig :- Pig is a platform for constructing data flows for extract, transform, and load (ETL) processing and analysis of large datasets. Pig Latin, the programming language for Pig, provides common data manipulation operations such as grouping, joining, and filtering. Pig generates Hadoop MapReduce jobs to perform the data flows.
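One way to drive such a data flow from Java is the embedded PigServer API. The sketch below groups a hypothetical tab-separated log by user and sums a bytes column; the input and output paths are assumptions, not anything from a real cluster.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigFlow {
    public static void main(String[] args) throws Exception {
        PigServer pig = new PigServer(ExecType.MAPREDUCE);   // run the flow as MapReduce jobs

        // hypothetical tab-separated log with columns: user, bytes
        pig.registerQuery("logs = LOAD '/data/weblogs' USING PigStorage('\\t') "
                + "AS (user:chararray, bytes:long);");
        pig.registerQuery("byUser = GROUP logs BY user;");
        pig.registerQuery("totals = FOREACH byUser GENERATE group AS user, "
                + "SUM(logs.bytes) AS total_bytes;");

        pig.store("totals", "/data/weblogs_totals");          // write the result back to HDFS
    }
}
```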
MapReduce :- MapReduce is a data processing paradigm for condensing large volumes of data into useful aggregated results. It is one of the most widely used general-purpose computing models and runtime systems for distributed data analytics.
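The classic illustration of the paradigm is word count: the map phase emits a (word, 1) pair for every token, and the reduce phase sums those pairs per word. Here is a minimal sketch of the two phases with the Hadoop MapReduce API (the job/driver setup is omitted).

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // map phase: for every input line, emit (word, 1) for each token
    public static class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // reduce phase: sum all the 1s emitted for the same word
    public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }
}
```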
HBase :- It is a column-oriented database able to store millions of columns and billions of rows. It is installed on top of Hadoop and stores structured and unstructured data in a key/value format; everything is stored as raw bytes. The major thing is that it provides the facility to update values, with high performance and very low latency.
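A small sketch of the HBase Java client API, putting one cell and reading it back. The users table and the info column family are assumed to exist already (created beforehand, e.g. from the HBase shell).

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseHello {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("users"))) {   // assumed existing table

            // everything is bytes: row key, column family, qualifier, and value
            Put put = new Put(Bytes.toBytes("row-1"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("email"),
                          Bytes.toBytes("a@example.com"));
            table.put(put);                                               // insert or update the cell

            Result result = table.get(new Get(Bytes.toBytes("row-1")));
            byte[] email = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("email"));
            System.out.println(Bytes.toString(email));
        }
    }
}
```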
Sqoop :- Sqoop is basically designed to let users import and export data between relational databases and their Hadoop clusters. It generates MapReduce jobs in the background to do the import and export, and it can work with many different databases (MySQL, Oracle, etc.).
Flume :- Flume is a very reliable, efficient, distributed system for collecting logs from different sources and storing them in a Hadoop cluster in real time. It has a simple and flexible architecture based on streaming data flows.
ZooKeeper :- As the name suggests, it keeps all the animals of the Hadoop ecosystem in line in the zoo; it is basically a coordination system for distributed applications. It is a centralized service for maintaining configuration information, naming, distributed synchronization, and group services.
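A tiny sketch with the ZooKeeper Java client: store a small piece of shared configuration under a znode and read it back. The connect string, timeout, and znode path are just examples.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZkConfigDemo {
    public static void main(String[] args) throws Exception {
        // connect string and session timeout are examples; the lambda ignores watch events
        ZooKeeper zk = new ZooKeeper("localhost:2181", 30000, event -> {});

        // store a small piece of shared configuration under a znode
        zk.create("/demo-config", "v1".getBytes(),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        byte[] data = zk.getData("/demo-config", false, null);   // read it back
        System.out.println(new String(data));
        zk.close();
    }
}
```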
Oozie :- Oozie is a server-based workflow engine specialized in running workflow jobs whose actions execute MapReduce and Pig jobs. Oozie provides an abstraction that batches a set of coordinator applications, and it gives the user the power to start/stop/suspend/resume a set of jobs.
Mahout :- Mahout is a tool for running different analytics on Hadoop data. It can execute machine learning algorithms over data in the Hadoop filesystem. Through Mahout you can do recommendation, data mining, clustering, etc.
Avro :- Avro is a serialization system that provides dynamic integration with many scripting languages like Python, Ruby, etc. It supports different file formats and text encodings.
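A brief sketch of how a record is defined and serialized with the Avro Java API; the User schema below is made up purely for the example.

```java
import java.io.ByteArrayOutputStream;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.avro.io.BinaryEncoder;
import org.apache.avro.io.DatumWriter;
import org.apache.avro.io.EncoderFactory;

public class AvroDemo {
    public static void main(String[] args) throws Exception {
        // hypothetical schema describing a "User" record
        String schemaJson = "{\"type\":\"record\",\"name\":\"User\",\"fields\":["
                + "{\"name\":\"name\",\"type\":\"string\"},"
                + "{\"name\":\"age\",\"type\":\"int\"}]}";
        Schema schema = new Schema.Parser().parse(schemaJson);

        // build a record that conforms to the schema
        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "alice");
        user.put("age", 30);

        // serialize it to Avro's compact binary encoding
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        DatumWriter<GenericRecord> writer = new GenericDatumWriter<>(schema);
        BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
        writer.write(user, encoder);
        encoder.flush();

        System.out.println(out.size() + " bytes");
    }
}
```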
Chukwa :- It is a data collection system for managing large distributed systems; it facilitates displaying, monitoring, and analyzing the collected log files.
There are many more, which I will describe in my later posts.
Thanks All