For this role, we expect senior Data Engineering experience building automated data pipelines on IBM DataStage & DB2, AWS, and Databricks, from source and operational databases through to the curation layer, using modern cloud technologies. Experience delivering complex pipelines will be highly valuable to how D&G maintains and delivers world-class data pipelines.
Knowledge in the following areas is essential:
- Databricks: Expertise in managing and scaling Databricks environments for ETL, data science, and analytics use cases.
- AWS Cloud: Extensive experience with AWS services such as S3, Glue, Lambda, RDS, and IAM.
- IBM Skills: DB2, DataStage, Tivoli Workload Scheduler, UrbanCode.
- Programming Languages: Proficiency in Python and SQL.
- Data Warehousing & ETL: Experience with modern ETL frameworks and data warehousing techniques.
- DevOps & CI/CD: Familiarity with DevOps practices for data engineering, including infrastructure-as-code (e.g., Terraform, CloudFormation), CI/CD pipelines, and monitoring (e.g., CloudWatch, Datadog).
- Familiarity with big data technologies like Apache Spark, Hadoop, or similar.
- ETL/ELT Tools: Experience creating common data sets across on-premises (IBM DataStage) and cloud data stores.
- Leadership & Strategy: Lead Data Engineering teams in designing, developing, and maintaining highly scalable and performant data infrastructures.
- Customer Data Platform Development: Architect and manage data platforms using IBM (legacy platform) & Databricks on AWS technologies (e.g., S3, Lambda, Glacier, Glue, EventBridge, RDS) to support real-time and batch data processing.
- Data Governance & Best Practices: Implement best practices for data governance, security, and data quality across the platform. Ensure data is well-documented, accessible, and compliant with standards.
- Pipeline Automation & Optimization: Drive automation of data pipelines and workflows to improve efficiency and reliability.
- Team Management: Mentor and grow a team of data engineers, ensuring alignment with business goals, delivery timelines, and technical standards.
- Cross-Company Collaboration: Work closely with business stakeholders, including data scientists, finance analysts, MI, and cross-functional teams, to ensure seamless data access and integration.
- Cloud Management: Lead efforts to integrate and scale cloud data services on AWS, optimizing costs and ensuring platform resilience.
- Performance Monitoring: Establish monitoring and alerting solutions to ensure high performance and availability of data pipelines and systems, preventing impact on downstream consumers.
Please note that if you are NOT a passport holder of the country for the vacancy, you might need a work permit. Check our Blog for more information.
Bank or payment details should not be provided when applying for a job. Eurojobs.com is not responsible for external website content. All applications should be made via the 'Apply now' button.
Created on 05/05/2025 by TN, United Kingdom
Contact Detail:
TN United Kingdom Recruiting Team