Technical Delivery Manager – Information Analytics

Python, PySpark, Scala, Java, SQL
Description

We are looking for a Technical Delivery Manager – Information Analytics who is determined to solve the organization's most challenging problems. Join our global workforce and explore a wealth of opportunities.

Location: Hyderabad
Role Type: Full Time
Published On: 10 November 2020
Experience: 10+ Years
Role and Responsibilities
  • Work with a team of full-time and consultant developers and QA engineers to build the systems that support the program, such as data transfer services, backend software infrastructure, and a reporting and analytics engine.
  • Own the software system development process from ideation through long-term sustaining ownership. Be responsible for keeping the applications current with periodic releases.
  • Drive development across multiple teams to create and maintain a holistic set of systems.
  • Lead agile software development practices across all teams and commit to a fast release schedule.
  • Conduct regular code reviews to keep the codebase in compliance with rigorous code review standards.
  • Implement test-driven development methodologies.
  • Conduct design reviews and assure that all modules fit the proposed architecture.
  • Effectively lead small-to-medium-sized global teams, delegating tasks and responsibilities.
  • Commit to long-term sustaining application ownership.
  • Participate in corrective actions to ensure closure or resolution.
Skills and Experience
  • Strong functional knowledge of the retail inventory or supply chain domain.
  • Prior experience in communicating with business stakeholders and architects.
  • Relevant Big Data or ETL data warehouse experience building cloud-native data pipelines.
  • Expertise in Python, PySpark, Scala, Java, and SQL. Strong object-oriented and functional programming experience in Python.
  • Hands-on experience with REST and SOAP-based APIs to extract data for data pipelines.
  • Extensive experience working with Hadoop and related processing frameworks, such as Spark, Hive, Sqoop, etc.
  • Should have worked in a public cloud environment; Amazon Web Services (AWS) experience is mandatory.
  • Thorough understanding of implementing solutions with AWS Virtual Private Cloud (VPC), EC2, AWS Data Pipeline, AWS CloudFormation, Auto Scaling, Amazon Simple Storage Service (S3), EMR, and related services such as Hive and Athena.
  • Expertise in working with real-time data streams and the Kafka platform.
  • Prior knowledge of workflow orchestration tools such as Apache Airflow, including designing and deploying Directed Acyclic Graphs (DAGs).
  • Hands-on experience with performance and scalability tuning.
  • Good understanding of agile or Scrum application development using Jira.
  • Strong analytical, problem-solving, and troubleshooting skills.


Apply Now