GSPANN is looking for Hadoop Developers/Leads. As we march ahead on a tremendous growth trajectory, we seek passionate and talented professionals to join our growing family.
Experience Required: 1 - 8 Years
Job Type: Permanent
Technical Skill Requirements: Hadoop Ecosystem, Java, Scala, Azure, Python, and MongoDB
Roles & Responsibilities:
Design, develop, and maintain high-volume Scala-based data processing batch jobs, using industry-standard tools and frameworks in the Hadoop ecosystem (Spark, Kafka, Scalding, Cascading, Hive, Impala, Avro, Flume, Oozie, and Sqoop).
Design and maintain schemas in the Hadoop/Vertica analytics database and write efficient SQL for loading and querying analytics data.
Integrate data processing jobs and services with applications such as Coremetrics and Twitter, using technologies such as Flume, Kafka, RabbitMQ, Spring, MongoDB, Elasticsearch, Coherence, and MySQL.
Write appropriate unit, integration, and load tests using industry-standard frameworks, such as Specs2, ScalaTest, ScalaCheck, JMeter, JUnit, Cucumber, and The Grinder.
Live by Agile (particularly Scrum) principles and collaborate with the team members using Agile techniques, including pair programming, test-driven development, code reviews, and retrospectives.
Maintain an ‘ideas to implementation’ innovation strategy by exploring new technologies, languages, and techniques in the rapidly evolving world of high-volume data processing.