A multinational insurance company.
- Design, build, and maintain scalable and efficient ETL/ELT pipelines in Azure Databricks to process structured, semi-structured, and unstructured insurance data from multiple internal and external sources.
- Collaborate with data architects, modellers, analysts, and business stakeholders to gather data requirements and deliver fit-for-purpose data assets that support analytics, regulatory, and operational needs.
- Develop, test, and optimize data transformation routines and batch/streaming solutions (leveraging tools such as Azure Data Factory, Azure Data Lake Storage Gen2, Azure Event Hubs, and Kafka) to ensure timely and accurate data delivery.
- Implement rigorous data quality, validation, and cleansing procedures, with a focus on enhancing reliability for high-stakes insurance use cases, reporting, and regulatory outputs.
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 8+ years of experience in data architecture and database design, preferably within the insurance sector (e.g., policy, claims, underwriting, actuarial, or regulatory reporting).
- Deep expertise in designing data solutions on Azure, with extensive hands-on experience leveraging Azure Databricks for processing, transforming, and analyzing structured, semi-structured (e.g., JSON, XML), and unstructured (e.g., text, images) insurance data.
- Skilled in using Azure Data Factory for orchestrating complex ETL/ELT pipelines involving diverse data formats in large, heterogeneous insurance datasets.
12-month rolling contract.
