Role: Data Engineer
Location: Glasgow, UK
Work Mode: Hybrid (3 days in office)
Contract Role: 6 months
Experience: 10+ years
Start Date: Immediate joiners only, or candidates with a maximum notice period of 2-3 weeks
No Visa Sponsorship
Must-have skills: AWS cloud ecosystem, Snowflake, Python, Apache Spark, and banking domain experience
Role Overview
We are seeking an experienced Data Engineer with strong expertise in the AWS cloud ecosystem, Snowflake, Python, and Apache Spark, along with proven experience in the banking domain. The ideal candidate will be responsible for designing, developing, and optimizing scalable data pipelines and modern data platforms that support analytics, reporting, and regulatory requirements.
________________________________________
Key Responsibilities
Design, build, and maintain scalable data pipelines using AWS services and modern data engineering practices.
Develop and optimize ETL/ELT workflows using Python and Apache Spark.
Implement and manage Snowflake data warehouse solutions including data modeling, performance tuning, and optimization.
Work closely with business stakeholders, data analysts, and architects to understand banking data requirements.
Integrate data from multiple banking systems, such as payments, transactions, customer, and risk platforms.
Ensure data quality, governance, security, and compliance aligned with banking regulations.
Develop data ingestion frameworks for structured and semi-structured data.
Optimize data processing performance and cost efficiency within AWS environments.
Support real-time and batch data processing solutions.
Document data architecture, data flows, and technical processes.
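To illustrate the flavour of the ingestion and data-quality work described above, here is a minimal, purely hypothetical Python sketch (record fields, function names, and the quarantine pattern are illustrative assumptions, not taken from the posting): it parses semi-structured JSON transaction records, validates and type-casts them, and routes bad rows aside, as a pipeline stage might before loading into a warehouse table.

```python
import json

# Hypothetical raw feed of semi-structured transaction records
# (field names are illustrative, not from the posting).
RAW_RECORDS = [
    '{"txn_id": "T1", "amount": "120.50", "currency": "GBP"}',
    '{"txn_id": "T2", "amount": "bad", "currency": "GBP"}',   # will fail validation
    '{"txn_id": "T3", "amount": "75.00", "currency": "USD"}',
]

def transform(raw_lines):
    """Parse raw JSON lines, cast types, and separate invalid rows."""
    rows, rejects = [], []
    for line in raw_lines:
        rec = json.loads(line)
        try:
            rec["amount"] = float(rec["amount"])  # enforce numeric type
        except ValueError:
            rejects.append(rec["txn_id"])  # e.g. route to a quarantine table
            continue
        rows.append(rec)
    return rows, rejects

if __name__ == "__main__":
    rows, rejects = transform(RAW_RECORDS)
    print(len(rows), rejects)  # 2 valid rows, T2 quarantined
```

In a production pipeline this kind of validate-and-quarantine step would typically run at scale in PySpark or AWS Glue rather than plain Python; the sketch only shows the shape of the logic.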
________________________________________
Required Skills & Qualifications
6+ years of experience in Data Engineering.
Strong hands-on experience with AWS services (S3, Glue, Lambda, Redshift, EMR, Athena, Step Functions, IAM).
Extensive experience with Snowflake including schema design and performance tuning.
Strong programming skills in Python.
Hands-on experience with Apache Spark / PySpark.
Experience building ETL/ELT pipelines and data integration frameworks.
Strong SQL and data modeling skills.
Experience working with large-scale datasets.