Job Description
Details
1 Role - Senior Developer
2 Required Technical Skill Set - Spark / Scala / Unix
3 Desired Experience Range - 5-8 years
4 Location of Requirement - Pune
Desired Competencies (Technical/Behavioral Competency)
Must-Have
Minimum 4 years of experience in Spark/Scala development
Experience in designing and developing Big Data solutions using Hadoop ecosystem components such as HDFS, Spark, Hive, the Parquet file format, YARN, MapReduce, and Sqoop
Strong experience in writing and optimizing Spark jobs and Spark SQL; should have worked on both batch and streaming data processing (see the sketch after this list)
Experience in writing and optimizing complex Hive and SQL queries to process large data volumes; proficient with UDFs, tables, joins, and views
Experience in debugging Spark code
Working knowledge of basic UNIX commands and shell scripting
Experience with Autosys and Gradle
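As a rough indication of the level expected for the Spark and Hive bullets above, a candidate should be comfortable writing a batch job along the following lines. This is a minimal sketch only: the database, table, and column names and the output path (sales.orders, sales.customers, /data/warehouse/daily_order_summary) are hypothetical, and a production job would add configuration and error handling.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyOrderSummary {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("DailyOrderSummary")
          .enableHiveSupport()   // allows reading Hive tables directly
          .getOrCreate()

        // Hypothetical Parquet-backed Hive tables.
        val orders    = spark.table("sales.orders")
        val customers = broadcast(spark.table("sales.customers"))  // small dimension table

        // Aggregate order value per customer for the run date passed as args(0);
        // the broadcast join avoids shuffling the large orders table.
        val summary = orders
          .filter(col("order_date") === args(0))
          .join(customers, Seq("customer_id"))
          .groupBy(col("customer_id"), col("order_date"))
          .agg(sum("amount").as("total_amount"), count("*").as("order_count"))

        // Write the result back as date-partitioned Parquet.
        summary.write
          .mode("overwrite")
          .partitionBy("order_date")
          .parquet("/data/warehouse/daily_order_summary")

        spark.stop()
      }
    }

Broadcasting the small dimension table rather than shuffle-joining it is the kind of optimization the "writing and optimizing Spark jobs" bullet refers to.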
Good-to-Have
Good analytical and debugging skills
Ability to coordinate with SMEs and stakeholders, manage timelines and escalations, and provide on-time status updates
Ability to write clear and precise documentation and specifications
Ability to work in an agile environment
Ability to document all developed mappings
Responsibility of / Expectations from the Role
1 Create Scala/Spark jobs for data transformation and aggregation (a minimal sketch follows this list)
2 Produce unit tests for Spark transformations and helper methods
3 Write Scaladoc-style documentation for all code
4 Design data processing pipelines
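To illustrate points 1-3, a small transformation with Scaladoc and a matching unit test might look like the sketch below. All names (Transformations, dailyClicks, the column names) are hypothetical, and ScalaTest is assumed as the test framework; the role description does not mandate one.

    import org.apache.spark.sql.{DataFrame, SparkSession}
    import org.apache.spark.sql.functions._

    object Transformations {

      /** Aggregates raw click events into per-user daily counts.
        *
        * @param events raw events with at least `user_id` and `event_date` columns
        * @return one row per (user_id, event_date) with a `clicks` count
        */
      def dailyClicks(events: DataFrame): DataFrame =
        events
          .groupBy("user_id", "event_date")
          .agg(count("*").as("clicks"))
    }

    // Matching unit test (ScalaTest assumed):
    import org.scalatest.funsuite.AnyFunSuite

    class TransformationsSpec extends AnyFunSuite {
      private val spark = SparkSession.builder()
        .master("local[2]")
        .appName("TransformationsSpec")
        .getOrCreate()
      import spark.implicits._

      test("dailyClicks counts events per user and day") {
        val events = Seq(
          ("u1", "2024-01-01"),
          ("u1", "2024-01-01"),
          ("u2", "2024-01-01")
        ).toDF("user_id", "event_date")

        val result = Transformations.dailyClicks(events).collect()

        // Two distinct (user, day) groups; u1 has two clicks.
        assert(result.length == 2)
        assert(result.find(_.getString(0) == "u1").get.getLong(2) == 2L)
      }
    }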