Senior Data Engineer

Avacone Basel-Stadt, Switzerland

The Opportunity

We are supporting a major data platform transformation within a banking environment, moving from a legacy SQL Server and SSIS-based setup to a modern, scalable architecture built on dbt, Dagster, and OpenShift. This role is not about maintaining existing systems: it is about rebuilding a critical data platform from the ground up, with direct impact on risk, trading PnL, and core financial data flows. We are looking for a hands-on Senior Data Engineer who can take ownership of complex migration workstreams and deliver reliably in a regulated, high-stakes environment.

What You Will Do

You will play a central role in the end-to-end migration and modernisation of the data platform.

Platform Transformation
• Translate legacy ETL logic from SSIS and stored procedures into modern ELT pipelines using dbt
• Implement Data Vault 2.0 structures, including Raw Vault and Business Vault
• Build data marts and curated datasets for downstream analytics and reporting
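To give a flavour of the Data Vault 2.0 work: hub and link tables are keyed on deterministic hashes of business keys rather than surrogate sequences. A minimal sketch (column and key names hypothetical, hash choice illustrative):

```python
import hashlib

def hub_hash_key(*business_keys: str) -> str:
    """Derive a deterministic Data Vault 2.0 hash key: trim and
    upper-case each business key, join with a delimiter, then hash."""
    normalised = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Hypothetical hub record for a counterparty identifier
record = {
    "counterparty_id": "CP-1042",
    "hub_counterparty_hk": hub_hash_key("CP-1042"),
}
```

Because the normalisation happens before hashing, the same business key always yields the same hub key regardless of whitespace or casing in the source feed.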

Orchestration & Infrastructure
• Design and operate workflows using Dagster, including scheduling, dependencies, and recovery mechanisms
• Deploy and run data workloads on OpenShift / Kubernetes environments
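Dagster itself provides scheduling, dependency resolution, and retry policies; the plain-Python sketch below only illustrates the two underlying ideas named above, dependency-ordered execution and recovery via retries (task names and bodies hypothetical):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(tasks, deps, max_retries=2):
    """Run tasks in dependency order, retrying each failure up to
    max_retries times before giving up.
    tasks: name -> zero-arg callable; deps: name -> set of upstream names."""
    results = {}
    for name in TopologicalSorter(deps).static_order():
        for attempt in range(max_retries + 1):
            try:
                results[name] = tasks[name]()
                break
            except Exception:
                if attempt == max_retries:
                    raise
    return results

# Hypothetical three-step ELT flow: extract -> transform -> publish
tasks = {
    "extract": lambda: "raw rows",
    "transform": lambda: "vault tables",
    "publish": lambda: "data mart",
}
deps = {"extract": set(), "transform": {"extract"}, "publish": {"transform"}}
```

In Dagster the same structure would be expressed declaratively as assets or ops with retry policies, rather than hand-rolled loops.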

Event-Driven Data Processing
• Enable near real-time data processing using Kafka-triggered pipelines
• Integrate with upstream data lake environments and external data providers
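The Kafka-triggered pattern boils down to: consume an event, run the corresponding pipeline step, and only acknowledge once processing succeeds. A sketch with the Kafka client stubbed out by a plain iterable (a real deployment would use a client such as confluent-kafka; topic and payload shapes are hypothetical):

```python
import json

def handle_events(consumer, process):
    """Consume JSON events and trigger a pipeline step per event.
    With a real Kafka client, offsets would be committed only after
    process() succeeds, so failed events are redelivered."""
    outcomes = []
    for message in consumer:  # a real client would poll with a timeout
        event = json.loads(message)
        outcomes.append(process(event))
    return outcomes

# Stub standing in for a Kafka consumer
stub_messages = [
    '{"table": "trades", "batch": 1}',
    '{"table": "positions", "batch": 2}',
]
result = handle_events(stub_messages, lambda e: f'refreshed {e["table"]}')
```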

Data Quality & Validation
• Establish robust data validation and reconciliation processes
• Implement automated testing and monitoring using dbt
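Reconciliation between the legacy source and the migrated target typically checks row counts, summed amounts, and key coverage. A minimal sketch of that shape (column names hypothetical; in practice much of this would live in dbt tests):

```python
def reconcile(source_rows, target_rows, key, amount):
    """Compare a source extract against its migrated target.
    Returns a dict of discrepancies; an empty dict means the
    reconciliation is clean."""
    issues = {}
    if len(source_rows) != len(target_rows):
        issues["row_count"] = (len(source_rows), len(target_rows))
    src_total = sum(r[amount] for r in source_rows)
    tgt_total = sum(r[amount] for r in target_rows)
    if abs(src_total - tgt_total) > 1e-9:
        issues["amount_total"] = (src_total, tgt_total)
    src_keys = {r[key] for r in source_rows}
    tgt_keys = {r[key] for r in target_rows}
    if src_keys - tgt_keys:
        issues["missing_keys"] = sorted(src_keys - tgt_keys)
    return issues

source = [{"id": 1, "pnl": 100.0}, {"id": 2, "pnl": -40.0}]
target = [{"id": 1, "pnl": 100.0}, {"id": 2, "pnl": -40.0}]
```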

Operational Ownership
• Support production pipelines and resolve incidents when required
• Create clear documentation and ensure operational readiness
• Continuously improve performance, reliability, and maintainability
