Course Overview
This Big Data course is designed to give newcomers a quick start in the Big Data world and to offer deeper, practical knowledge to candidates who want to move beyond Hadoop. The content suits candidates preparing for certification as well as those looking to build their profile in the Big Data domain. It covers the main Hadoop ecosystem components, including Hive, Pig, Flume, Sqoop and Zookeeper, along with the widely adopted processing framework Apache Spark. The course addresses both real-time and batch processing frameworks such as Spark and MapReduce, and also provides an overview of popular big-data platforms from Hortonworks and Cloudera.

Course Objectives

At ProTechSkills we cover the major components of Hadoop and its ecosystem. By the end of the course you will have learned:

  1. Virtualization and cluster creation
  2. Hadoop HDFS and MapReduce
  3. The next-generation resource manager: YARN
  4. Writing SQL-like queries using Hive
  5. Writing Pig scripts to query data
  6. Using Sqoop to import data from an RDBMS into HDFS
  7. Using Flume to ingest real-time data
  8. Configuring Zookeeper to achieve HDFS High Availability
  9. Working with the Hortonworks and Cloudera platforms
  10. Real-time data processing using Spark (see the short sketch after this list)
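
To give a feel for the hands-on side of the last item, here is a minimal sketch of real-time word counting with PySpark Structured Streaming. It is illustrative only: the application name, host and port are placeholders chosen for this example, and the actual course exercises may use different data sources and sinks.

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    # Entry point for a Spark application (the name is a placeholder)
    spark = SparkSession.builder.appName("StreamingWordCount").getOrCreate()

    # Read a stream of text lines from a local socket; for testing,
    # run "nc -lk 9999" in another terminal and type some lines
    lines = (spark.readStream
             .format("socket")
             .option("host", "localhost")
             .option("port", 9999)
             .load())

    # Split each line into words and maintain a running count per word
    words = lines.select(explode(split(lines.value, " ")).alias("word"))
    word_counts = words.groupBy("word").count()

    # Print the running counts to the console as new data arrives
    query = (word_counts.writeStream
             .outputMode("complete")
             .format("console")
             .start())
    query.awaitTermination()

The same DataFrame API is used against static files for batch jobs, so an example like this carries over directly to the batch processing with Spark that the course also covers.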

Who should go for this course?

Based on your current profile, you can opt for any of the following roles in the Big Data Hadoop space:

Profile                     Best Suited For
Big Data Administrator      Unix/Linux Administrator, System Administrator
Big Data Developer          Java, Python or .NET Developer
Data Analyst                Data/Business Analyst, Data Warehousing professional

Course Duration

36 to 40 hours (2-month program, weekends only)

Course Structure

BigData_Course_Details