Job description:
• At least 5-8 years of experience with one or more ETL tools such as Prophecy, DataStage 11.5/11.7, Pentaho, etc.
• At least 3 years of experience in PySpark with GCP (Airflow, Dataproc, BigQuery), with the ability to configure data pipelines (see the illustrative sketch after this list).
• Strong experience in writing complex SQL queries to perform data analysis on databases such as SQL Server, Oracle, Hive, etc.
• Possess the following technical skills: SQL, Python, PySpark, Hive, ETL, Unix, and Control-M (or similar scheduling tools).
• Ability to work independently on specialized assignments within the context of project deliverables
• Take ownership of providing solutions and tools that iteratively increase engineering efficiencies.
• Designs should help embed standard processes, systems, and operational models into the BAU (business-as-usual) approach for end-to-end execution of data pipelines.
• Proven problem-solving and analytical abilities, including the ability to critically evaluate information gathered from multiple sources, reconcile conflicts, decompose high-level information into details, and apply sound business and technical domain knowledge.
• Communicate openly and honestly. Advanced oral, written, and visual communication and presentation skills; the ability to communicate effectively at a global level is paramount.
• Ability to deliver materials of the highest quality to management against tight deadlines.
• Ability to work effectively under pressure with competing and rapidly changing priorities.
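For context only, the sketch below illustrates the kind of pipeline configuration referenced in the PySpark/GCP requirement above: an Airflow DAG that submits a PySpark job to Dataproc, with output assumed to land in BigQuery. The project ID, region, cluster name, bucket path, and table names are hypothetical placeholders, not details of this role's actual environment.

    # Illustrative sketch only: a minimal Airflow DAG that submits a PySpark
    # job to Dataproc. All identifiers below are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.google.cloud.operators.dataproc import (
        DataprocSubmitJobOperator,
    )

    PROJECT_ID = "example-project"        # placeholder GCP project
    REGION = "us-central1"                # placeholder region
    CLUSTER_NAME = "example-cluster"      # placeholder Dataproc cluster

    # PySpark job definition: the driver script (stored in GCS) would read
    # source data, transform it with Spark, and write results to BigQuery.
    PYSPARK_JOB = {
        "reference": {"project_id": PROJECT_ID},
        "placement": {"cluster_name": CLUSTER_NAME},
        "pyspark_job": {
            "main_python_file_uri": "gs://example-bucket/jobs/transform.py",
            "args": ["--target-table", "example_dataset.example_table"],
        },
    }

    with DAG(
        dag_id="example_daily_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        submit_pyspark = DataprocSubmitJobOperator(
            task_id="submit_pyspark_job",
            project_id=PROJECT_ID,
            region=REGION,
            job=PYSPARK_JOB,
        )

This is a minimal sketch under those assumptions, not a prescribed implementation; in practice the role's pipelines may also be orchestrated through Control-M or similar scheduling tools as noted above.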