Big Data Engineers and Leads

Big Data, Spark, PySpark, Java/Python/Scala, AWS/GCP/Azure
Description

GSPANN is looking for Big Data Engineers and Leads. Our culture fosters individual initiative and excellence. Join our global workforce to unleash a pool of opportunities.

Who We Are

GSPANN has been in business for over a decade, has over 1,800 employees worldwide, and serves some of the largest retail, high-technology, and manufacturing clients in North America. We provide an environment that enables career growth while still interacting with company leadership.

Visit Why GSPANN for more information.

Location: Hyderabad / Gurugram / Pune
Role Type: Full Time
Published On: 6 October 2022
Experience: 2 - 8 Years
Role and Responsibilities
  • Participate in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing, roll-out, and support. 
  • Solve complex business problems by utilizing a disciplined development methodology. 
  • Produce scalable, flexible, efficient, and supportable solutions using appropriate technologies. 
  • Analyze source and target system data, and map the transformations that meet the requirements. 
  • Interact with clients and onsite teams during different phases of a project. 
  • Coordinate and collaborate with business stakeholders, architects, and other teams. 
  • Conduct performance and scalability tuning.
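The source-to-target mapping mentioned above can be sketched in a few lines of Python. This is a minimal illustration only, with hypothetical field names; real mappings come out of the requirements-gathering and data-analysis phases described in this role.

```python
# Hypothetical source-to-target field mapping for illustration.
FIELD_MAP = {
    "cust_id": "customer_id",
    "ord_dt": "order_date",
    "amt": "order_amount",
}

def transform_record(source):
    """Rename source fields to the target schema and cast the amount to a number."""
    target = {tgt: source[src] for src, tgt in FIELD_MAP.items() if src in source}
    if "order_amount" in target:
        target["order_amount"] = float(target["order_amount"])
    return target

row = {"cust_id": "C001", "ord_dt": "2022-10-06", "amt": "129.99"}
print(transform_record(row))
# → {'customer_id': 'C001', 'order_date': '2022-10-06', 'order_amount': 129.99}
```

In practice this kind of mapping would run inside a distributed engine such as Spark rather than on single dictionaries, but the field-renaming and type-casting logic is the same.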
Skills and Experience
  • Experience in developing Big Data/ETL data warehouses and building cloud-native data pipelines. 
  • Prior experience in developing applications with Agile/Scrum methodologies using Jira. 
  • Sound knowledge of Hive, Spark, Scala/Java/Python, and SQL. 
  • Prior experience in object-oriented and functional programming using Python. 
  • Good understanding of REST and SOAP-based APIs to extract data for data pipelines. 
  • Expertise in Hadoop and related processing frameworks, such as Spark, Hive, and Sqoop. 
  • Prior experience in working in a public cloud environment, i.e., GCP, AWS, or Azure. 
  • Hands-on experience in working with real-time data streams and the Kafka platform. 
  • Good knowledge of workflow orchestration tools, such as Apache Airflow, to design and deploy Directed Acyclic Graphs (DAGs).
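To illustrate the DAG concept behind orchestration tools like Airflow, here is a minimal pure-Python sketch (no Airflow dependency) that computes a valid execution order for a small hypothetical extract-transform-load pipeline using Kahn's algorithm:

```python
from collections import deque

def topological_order(dag):
    """Return a valid execution order for a DAG given as
    {task: [downstream_tasks]} using Kahn's algorithm."""
    indegree = {t: 0 for t in dag}
    for downs in dag.values():
        for d in downs:
            indegree[d] = indegree.get(d, 0) + 1
    queue = deque(t for t, deg in indegree.items() if deg == 0)
    order = []
    while queue:
        task = queue.popleft()
        order.append(task)
        for d in dag.get(task, []):
            indegree[d] -= 1
            if indegree[d] == 0:
                queue.append(d)
    if len(order) != len(indegree):
        raise ValueError("cycle detected: not a DAG")
    return order

# Hypothetical pipeline: extract runs before transform, which runs before load.
pipeline = {"extract": ["transform"], "transform": ["load"], "load": []}
print(topological_order(pipeline))  # → ['extract', 'transform', 'load']
```

Airflow performs this kind of dependency resolution internally; engineers declare tasks and their upstream/downstream relationships, and the scheduler derives the run order.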


Apply Now