Hadoop Administrators

Cloud Technologies (Azure/AWS/GCP), Hadoop, MapReduce, Yarn, Airflow, NiFi, Spark, Ambari, Python, Hive, Kubernetes, Jenkins, Linux, Shell Scripting, Audit Trail
Description

GSPANN is looking for seasoned Hadoop Administrators. As we march ahead on a tremendous growth trajectory, we seek passionate and talented professionals to join our growing family.

Who We Are

GSPANN has been in business for over a decade, has over 2,000 employees worldwide, and services some of the largest retail, high-technology, and manufacturing clients in North America. We provide an environment that enables career growth while still interacting with company leadership.

Visit Why GSPANN for more information.

Role and Responsibilities
  • Configure property files such as core-site.xml, hdfs-site.xml, and mapred-site.xml based on job requirements (see the command sketch after this list).
  • Manage and review Hadoop log files.
  • Conduct performance tuning of the Hadoop cluster and Hadoop jobs.
  • Take complete ownership of disk space management and monitoring.
  • Perform data balancing on clusters.
  • Manage Hadoop Distributed File System (HDFS) cluster users and permissions.
  • Analyze system failures, identify root causes, and recommend a course of action. Document the systems processes and procedures for future reference.
  • Troubleshoot application errors and ensure that they do not occur again.
  • Configure NameNode to ensure high availability.
  • Analyze storage data volume and assign space in HDFS.
  • Manage software and hardware deployments in the Hadoop ecosystem, including the expansion of existing environments.
  • Implement new Hadoop clusters and maintain existing ones.
  • Deploy and manage Hadoop infrastructure on an ongoing basis.
  • Install Hadoop on Linux.
  • Monitor the Hadoop cluster to ensure it stays up to date and continuously available.
  • Manage resources in the cluster ecosystem: commission new nodes and decommission non-functioning ones.
  • Monitor Hadoop cluster job performance and carry out capacity planning.
  • Monitor Hadoop cluster connectivity and security.
  • Monitor file system management across the team.
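
Several of the responsibilities above (disk space monitoring, data balancing, space quotas, HDFS permissions, NameNode high availability, and node decommissioning) map onto standard HDFS and YARN command-line operations. The commands below are a minimal illustrative sketch only; the NameNode service ID nn1, the path /user/analytics, and the owner/group names are assumptions for illustration, not details taken from this posting.

    # Disk space management and monitoring
    hdfs dfsadmin -report                        # per-DataNode capacity, used and remaining space
    hdfs dfs -du -h /user                        # storage consumed per directory

    # Assign space in HDFS (path and quota are illustrative)
    hdfs dfsadmin -setSpaceQuota 10t /user/analytics

    # Data balancing across DataNodes (10% utilisation spread)
    hdfs balancer -threshold 10

    # HDFS users and permissions (owner and group are illustrative)
    hdfs dfs -chown -R etl:analytics /user/analytics
    hdfs dfs -chmod -R 750 /user/analytics

    # NameNode high availability: check which NameNode is active (nn1 is an assumed service ID)
    hdfs haadmin -getServiceState nn1

    # Decommission a non-functioning node after adding it to the configured exclude file
    hdfs dfsadmin -refreshNodes
    yarn rmadmin -refreshNodes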
Skills and Experience
  • Mandatory 2+ years of production support (L2/L3) experience in DevOps and cloud/multi-cloud environments (a sample routine health check is sketched after this list).
  • Hands-on experience in at least one of the cloud technologies like AWS, Azure, or GCP.
  • Should have good knowledge of Hadoop-ecosystem components such as Airflow, Spark, YARN, Hive, HBase, ZooKeeper, HDFS, and MapReduce.
  • Should know Hadoop implementation procedures and be able to keep Hadoop clusters running seamlessly in production, taking ownership of the clusters and other resources in the Hadoop ecosystem.
  • Must have a good understanding of core concepts of Windows and Linux for administration and troubleshooting.
  • Should be adept in configuration management tools such as Ansible or Terraform.
  • Proficiency in CI/CD tooling such as Jenkins or similar would be an advantage.
  • Good understanding of support delivery processes/guidelines like problem management, incident management, change management, SLA compliance, productivity, and other application goals.
  • Should be open to rotational shifts.
  • Must have good communication, analysis, and debugging skills.
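
As a rough illustration of the production-support and shell-scripting expectations above, a routine cluster health check might look like the sketch below. This is a hypothetical example, not a GSPANN procedure; the checks and the 85% disk threshold are assumptions that would depend on the actual environment.

    #!/usr/bin/env bash
    # Hypothetical daily health check for an L2/L3 Hadoop support shift.
    set -uo pipefail

    # NameNode should normally report "Safe mode is OFF"
    hdfs dfsadmin -safemode get

    # Summary line for under-replicated blocks from a filesystem check
    hdfs fsck / 2>/dev/null | grep -i 'under-replicated' || true

    # Any YARN NodeManagers that are not in RUNNING state need follow-up
    yarn node -list -all | grep -Ev 'RUNNING|Total Nodes|Node-Id' || true

    # Flag local partitions above 85% utilisation (threshold is illustrative)
    df -h | awk '$5+0 > 85 {print "High disk usage:", $0}'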

Key Details

Location: Hyderabad / Gurugram / Pune
Role Type: Full Time
Published On: 12 April 2023
Experience: 5+ Years

Apply Now