Hi Connections!
We are Hiring! #MicrosoftAzure #PythonForDataScience #Databricks #PySpark #AzureDataFactory #ImmediateJoiners
🍁 Job Title: Developer
📍 Work Location: Mumbai, MH / Hyderabad, TG / Chennai, TN / Bhubaneswar, OR / Bangalore, KA
📌 Skill Required: Digital: Microsoft Azure, Digital: Python for Data Science, Digital: Databricks, Digital: PySpark, Azure Data Factory
💼 Experience Range in Required Skills: 6-8 Years
📧 Email: Lavanya.j@natobotics.com
Job Description:
1. Designing and implementing data ingestion pipelines from multiple sources using Azure Databricks (a minimal sketch follows this list).
2. Developing scalable and re-usable frameworks for ingesting data sets.
3. Integrating the end-to-end data pipeline to take data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times.
4. Working with event-based streaming technologies to ingest and process data.
5. Working with other members of the project team to support delivery of additional project components (API interfaces, Search).
6. Evaluating the performance and applicability of multiple tools against customer requirements.
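To give candidates a feel for the day-to-day work, here is a minimal, hypothetical PySpark sketch of point 1 above: ingesting a CSV source from ADLS Gen2 into a Delta table with basic quality checks. The storage account, container, path, column, and table names are placeholders for illustration, not part of any real project here.

```python
# Minimal PySpark ingestion sketch (hypothetical paths and names).
# Assumes it runs on Azure Databricks, where Delta Lake is available.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sample-ingest").getOrCreate()

# Source and target are placeholders for an ADLS Gen2 location and a Delta table.
source_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/2024/*.csv"
target_table = "bronze.sales_raw"

# Ingest: read CSV files with a header row and inferred schema.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(source_path)
)

# Basic quality checks: drop exact duplicates and rows missing the business key,
# then stamp each record with its ingestion time.
clean_df = (
    df.dropDuplicates()
      .filter(F.col("order_id").isNotNull())
      .withColumn("ingested_at", F.current_timestamp())
)

# Land the data in a Delta table for downstream transformation layers.
clean_df.write.format("delta").mode("append").saveAsTable(target_table)
```

In practice the same pattern would be wrapped in a reusable, parameterized framework (point 2) rather than hard-coded per source.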
Key Responsibilities:
1. Develop and maintain ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks.
2. Implement data ingestion flows from diverse sources including Azure Blob Storage, Azure Data Lake, On-Prem SQL, and SFTP.
3. Design and optimize data models and transformations using Oracle, Spark SQL, PySpark, SQL Server, Progress DB SQL.
4. Build orchestration workflows in ADF using activities like Lookup, ForEach, Execute Pipeline, and Set Variable (see the notebook sketch at the end of this post).
5. Perform root cause analysis and resolve production issues in pipelines and notebooks. Collaborate on CI/CD pipeline creation using Azure DevOps and Jenkins.
6. Apply performance tuning techniques to Azure Synapse Analytics and SQL DW.
7. Maintain documentation including runbooks, technical design specs, and QA test cases.
8. Data Pipeline Engineering: design and implement scalable, fault-tolerant data pipelines using Azure Synapse and Databricks.
9. Ingest data from diverse sources including flat files, DB2, NoSQL, and cloud-native formats (CSV, JSON).
Technical Skills Required:
Cloud Platforms: Azure (ADF, ADLS, ADB, Azure SQL, Synapse, Cosmos DB)
ETL Tools: Azure Data Factory, Azure Databricks
Programming: SQL, PySpark, Spark SQL
DevOps & Automation: Azure DevOps, Git, CI/CD, Jenkins
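As a rough illustration of how responsibilities 2 and 4 fit together, the sketch below imagines a parameterized Databricks notebook that an ADF Lookup -> ForEach -> Notebook activity chain could call once per table. The widget names, JDBC endpoint, secret scope, and table names are assumptions for illustration only, not an actual implementation used on this project.

```python
# Hypothetical parameterized Databricks notebook, intended to be invoked from an
# ADF pipeline (e.g. Lookup -> ForEach -> Databricks Notebook activity).
# Assumes the Databricks runtime, where `spark` and `dbutils` are predefined.
from pyspark.sql import functions as F

# Parameters passed in from ADF base parameters (names are placeholders).
dbutils.widgets.text("source_table", "dbo.orders")
dbutils.widgets.text("target_table", "silver.orders")

source_table = dbutils.widgets.get("source_table")
target_table = dbutils.widgets.get("target_table")

# JDBC connection details would normally come from a secret scope backed by Key Vault.
jdbc_url = "jdbc:sqlserver://example-sql.database.windows.net:1433;database=sales"
connection_props = {
    "user": dbutils.secrets.get(scope="example-scope", key="sql-user"),
    "password": dbutils.secrets.get(scope="example-scope", key="sql-password"),
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
}

# Pull the source table over JDBC and apply a simple transformation.
df = spark.read.jdbc(url=jdbc_url, table=source_table, properties=connection_props)
transformed = df.withColumn("load_date", F.current_date())

# Write to a Delta table; ADF handles orchestration, retries, and scheduling.
transformed.write.format("delta").mode("overwrite").saveAsTable(target_table)

# Return a status string so downstream ADF activities can branch on it.
dbutils.notebook.exit("SUCCESS")
```

Interested candidates with hands-on experience in this kind of work, please share your profile at the email above.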