Big Data Engineers

Big Data, PySpark, Spark, AWS, Azure, GCP, Python, Hadoop, Workflow Orchestration
Description

GSPANN is looking for Big Data Engineers to join our growing family. We offer a broad range of opportunities for every stage of your career.

Role and Responsibilities
  • Actively participate in all phases of the software development life cycle, including requirement gathering, functional and technical design, development, testing, roll-out, and support.
  • Solve complex business problems using a disciplined development methodology.
  • Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies.
  • Analyze source and target system data, and map the transformations that meet the requirements; a brief sketch follows this list.
  • Interact with the client and onsite coordinators during different phases of a project.
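
To illustrate the kind of source-to-target mapping work this role involves, here is a minimal PySpark sketch. It is not taken from any GSPANN project; the dataset, columns, and S3 paths are hypothetical placeholders, assuming a working PySpark environment.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("source_to_target_mapping").getOrCreate()

    # Read source data (placeholder path and format).
    orders = spark.read.parquet("s3://example-bucket/raw/orders/")

    # Map source columns to the target schema and apply a simple aggregation.
    target = (
        orders
        .withColumn("order_date", F.to_date("order_ts"))
        .groupBy("order_date", "region")
        .agg(F.sum("amount").alias("daily_revenue"))
    )

    # Write to the target location (placeholder path).
    target.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")

The same read-transform-write pattern generalizes to most batch ETL steps.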
Skills and Experience
  • 3+ years of experience in software design and development using Big Data technologies.
  • Prior experience in developing big data, Extract, Transform, Load (ETL), data warehouse, and cloud-native data pipelines.
  • Expertise in Spark with Scala, Python, or Java.
  • Strong experience with PL/SQL, the procedural language extension of SQL.
  • Good understanding of working with REST and SOAP-based APIs to extract data for data pipelines.
  • Thorough knowledge of Hadoop and related processing frameworks, such as Spark, Hive, and Sqoop.
  • Good understanding of performance testing, application testing, and scheduling tools.
  • Prior work experience in a public cloud environment, particularly Amazon Web Services (AWS).
  • Experience in implementing solutions using AWS Virtual Private Cloud (VPC), Elastic Compute Cloud (EC2), Data Pipeline, CloudFormation, Auto Scaling, Simple Storage Service (Amazon S3), Elastic MapReduce (EMR), and related services such as Athena and Hive on EMR.
  • Hands-on experience in working with real-time data streams and the Kafka platform.
  • Thorough knowledge of workflow orchestration tools such as Apache Airflow to design and deploy Directed Acyclic Graphs (DAGs); a minimal sketch follows this list.
  • Hands-on experience in performance and scalability tuning.
  • Prior experience in Agile/Scrum application development using Jira.
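
As referenced in the Airflow bullet above, here is a minimal sketch of an Airflow DAG that schedules a daily PySpark job, assuming Airflow 2.x; the DAG id, schedule, and script path are hypothetical, not part of this posting.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_etl_pipeline",       # hypothetical DAG name
        start_date=datetime(2021, 1, 13),
        schedule_interval="@daily",        # run once per day
        catchup=False,                     # skip backfill of past runs
    ) as dag:
        # Submit a PySpark job; /opt/jobs/transform.py is a placeholder path.
        run_spark_etl = BashOperator(
            task_id="run_spark_etl",
            bash_command="spark-submit /opt/jobs/transform.py",
        )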

Key Details

Location: Gurugram / Hyderabad / Pune / Anywhere in India
Role Type: Full Time
Published On: 13 January 2021
Experience: 2+ Years

Apply Now