When you don’t want to import the whole table but only its newly added or updated rows, you can use Sqoop’s incremental import feature. This saves considerable resources and lets you periodically sync the table to HDFS. There are various ways to do this. Sqoop supports […]
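As a rough sketch of what such an import might look like (the connection string, database, table, column, and value below are placeholders, not taken from the post), an append-mode incremental import can be invoked like this:

    sqoop import \
      --connect jdbc:mysql://dbhost/salesdb \
      --username dbuser \
      --table orders \
      --target-dir /user/hadoop/orders \
      --incremental append \
      --check-column order_id \
      --last-value 1000

Only rows whose order_id is greater than 1000 are imported on this run, and Sqoop reports the new last-value to pass on the next run.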
Writing Custom Combiner in MapReduce
The Combiner function is used as an optimization technique for MapReduce jobs. The Combiner class combines/reduces the data generated by the Mappers before it gets transferred to the Reducers. In the previous post, you learned how a Combiner works in MapReduce programming. In most cases you can use the Reducer class as the Combiner class. […]
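As an illustrative sketch (the class and field names are placeholders, not taken from the post), a custom Combiner for a word-count job is just a Reducer subclass that pre-aggregates map output:

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Runs on each map task's output before the shuffle, partially summing counts.
    public class WordCountCombiner extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();   // partial sum of the counts emitted by one Mapper
            }
            result.set(sum);
            context.write(key, result);   // partially aggregated pair sent to the Reducer
        }
    }

In the driver, job.setCombinerClass(WordCountCombiner.class) registers it. Note that Hadoop may run the Combiner zero, one, or several times per key, so its logic must be safe to apply repeatedly, which is why associative, commutative operations like summing work well.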
How Combiner works in Hadoop MapReduce
Hadoop is a framework for handling Big Data. It uses HDFS as the distributed storage mechanism and MapReduce as the parallel processing paradigm for data residing in HDFS. The key components of MapReduce are the Mapper and the Reducer. When a MapReduce job runs on a large dataset, Mappers generate large […]
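To make the Mapper/Reducer split concrete, here is a minimal word-count Mapper sketch (class names are illustrative, not from the post); it is exactly this flood of (word, 1) pairs that a Combiner pre-aggregates before the shuffle:

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits one (word, 1) pair per token of each input line.
    public class TokenizerMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);   // large intermediate output a Combiner shrinks
            }
        }
    }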