Location: Krakow, Poland (HYBRID)
Experience: Minimum 10 years of experience in data
Skills: Azure Databricks (Apache Spark), SSRS, Azure SQL PaaS, knowledge of Azure and Azure DevOps. Proficiency in at least one programming language, preferably Python
Requirement:
- Business Information Glossaries – Azure Data Catalog, Business Objects
- Cloud Computing Services – MS Azure
- Distributed Systems – Databricks, Spark
- ETL Tools – Alteryx, Azure Data Factory, Power BI Dataflows
- Graph Databases – Azure Cosmos DB
- NoSQL Document Stores – Apache, Elasticsearch, MongoDB, SharePoint
- Programming Languages – .Net, C#, C++, CSS, HTML5, Java, Node.js, PowerShell, Python
- Relational SMP Databases – Azure SQL PaaS, MySQL, or SQL Server
Responsibilities:
- Analyzing and translating business needs into long-term solution data models.
- Evaluating existing data systems.
- Working with the development team to create conceptual data models and data flows.
- Developing best practices for data coding to ensure consistency within the system.
- Reviewing modifications of existing systems for cross-compatibility.
- Implementing data strategies and developing physical data models.
- Updating and optimizing local and metadata models.
- Evaluating implemented data systems for variances, discrepancies, and efficiency.
- Troubleshooting and optimizing data systems.
- Designing logical and physical data models, normalizing unstructured mainframe data into OLTP schemas.
- Creating SAS coding and conversion standards.
- Documenting all source-to-target data mappings for the EDW.
- Contributing to data model changes in XML format.
- Generating XML structures as part of the JSON design.
Job Types: Full-time, Contract, Permanent
Contract length: 6 months
This job has now closed