At a Glance
- Tasks: Lead the design and build of advanced data platforms for tackling fraud in finance.
- Company: Join a high-performing team at a leading financial tech firm.
- Benefits: Competitive day rate, flexible work environment, and opportunity for professional growth.
- Other info: Hands-on leadership role with excellent mentorship opportunities.
- Why this job: Make a real impact by delivering scalable data solutions using cutting-edge technologies.
- Qualifications: Strong experience in data engineering, Databricks, and cloud architectures.
Location: London (2 days onsite per week)
Day Rate: £525-£550 (Outside IR35)
Duration: 6-month contract (minimum)
Start: ASAP
Overview
We are looking for a Lead Data Engineer to join a high-performing team delivering advanced data platforms that support financial institutions in tackling fraud and financial crime. In this role, you will help design and evolve a modern Databricks + AWS lakehouse architecture, enabling analytics, machine learning, and investigative teams to generate actionable insights from large-scale datasets. This is a hands-on leadership position focused on building robust, scalable, and governed data solutions using modern cloud technologies.
The Role
- Own the end-to-end design, build, optimisation, and support of scalable Spark / PySpark data pipelines (batch and streaming)
- Define and implement lakehouse architecture standards (medallion model: bronze, silver, gold), including governance, lineage, and data quality controls
- Design and manage secure data ingestion frameworks (e.g. Apache NiFi, APIs, SFTP/FTPS) for internal and external data sources
- Architect and maintain secure AWS-based data infrastructure (S3, IAM, KMS, Glue, Lake Formation, Lambda, Step Functions, CloudWatch, etc.)
- Implement orchestration using tools such as Airflow, Databricks Workflows, and Step Functions
- Champion data quality, observability, and reliability (SLAs, monitoring, alerting, reconciliation)
- Drive CI/CD best practices for data platforms (infrastructure as code, automated testing, versioning, environment promotion)
- Mentor engineers on distributed data processing, performance optimisation, and cost efficiency
- Collaborate with data science, product, and compliance teams to translate requirements into scalable data solutions
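To make the medallion model mentioned above concrete: the sketch below illustrates the bronze/silver/gold layering in plain Python, with dicts standing in for Spark DataFrames purely for illustration. The field names (`account_id`, `amount`) and the `payments_api` source tag are hypothetical, not taken from the role description.

```python
def bronze_ingest(raw_records):
    """Bronze: land raw records as-is, tagged with ingestion metadata."""
    return [{"raw": r, "source": "payments_api"} for r in raw_records]

def silver_clean(bronze):
    """Silver: validate and standardise; drop records failing quality checks."""
    cleaned = []
    for rec in bronze:
        raw = rec["raw"]
        if raw.get("amount") is not None and raw.get("account_id"):
            cleaned.append({
                "account_id": str(raw["account_id"]).strip(),
                "amount": float(raw["amount"]),
            })
    return cleaned

def gold_aggregate(silver):
    """Gold: business-level view, e.g. total transaction value per account."""
    totals = {}
    for rec in silver:
        totals[rec["account_id"]] = totals.get(rec["account_id"], 0.0) + rec["amount"]
    return totals

raw = [
    {"account_id": "A1", "amount": "10.50"},
    {"account_id": "A1", "amount": 4.5},
    {"account_id": None, "amount": 99},   # fails the silver quality check
    {"account_id": "B2", "amount": 20},
]
print(gold_aggregate(silver_clean(bronze_ingest(raw))))
# {'A1': 15.0, 'B2': 20.0}
```

In a production lakehouse each layer would be a governed Delta table with lineage and quality checks enforced between stages; the shape of the flow, however, is the same.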
Required Skills & Experience
- Strong experience as a Senior or Lead Data Engineer with ownership of end-to-end data solutions
- Expertise in Databricks, PySpark / Spark, SQL, and Python
- Proven experience building and optimising large-scale data pipelines in production environments
- Strong knowledge of cloud data architectures, particularly within AWS
- Experience designing scalable data models and reusable frameworks
- Hands-on experience with orchestration tools such as Airflow or similar
- Solid understanding of data governance, lineage, and compliance requirements
- Experience with CI/CD pipelines and infrastructure as code (e.g. Terraform, CloudFormation)
- Strong communication skills with the ability to collaborate across technical and non-technical teams
What We’re Looking For
- A hands-on technical leader who can design, build, and deliver solutions independently
- Someone comfortable working with high-volume, high-throughput data systems
- Strong problem-solving skills and a pragmatic, delivery-focused mindset
- Experience mentoring engineers and setting engineering standards and best practices
- Ability to balance technical excellence with delivery timelines
Lead Data Science Engineer employer: Forsyth Barnes
Contact Detail:
Forsyth Barnes Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Lead Data Science Engineer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with AWS and Databricks. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best data pipelines and projects. When you get an interview, you can walk them through your work, demonstrating your hands-on experience and problem-solving abilities.
✨Tip Number 3
Prepare for technical interviews by brushing up on your Spark and PySpark knowledge. Practice coding challenges related to data processing and architecture design. We want you to feel confident when discussing your expertise!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, we love seeing candidates who are proactive and engaged with our platform.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Lead Data Engineer role. Highlight your experience with Databricks, AWS, and data pipelines. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're the perfect fit for our team. Share specific examples of your past work that align with the job description.
Showcase Your Technical Skills: Don’t hold back on showcasing your technical expertise! Mention your hands-on experience with Spark, PySpark, and orchestration tools like Airflow. We love seeing real-world applications of your skills.
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of your application and ensures you don’t miss any important updates from us!
How to prepare for a job interview at Forsyth Barnes
✨Know Your Tech Inside Out
Make sure you’re well-versed in Databricks, PySpark, and AWS. Brush up on your knowledge of data pipelines and lakehouse architecture. Be ready to discuss specific projects where you've implemented these technologies.
✨Showcase Your Leadership Skills
As a Lead Data Engineer, you'll need to demonstrate your ability to mentor and guide others. Prepare examples of how you've led teams or projects, focusing on your approach to problem-solving and setting engineering standards.
✨Prepare for Scenario-Based Questions
Expect questions that assess your practical experience with data governance, compliance, and CI/CD practices. Think of real-world scenarios where you’ve had to tackle challenges related to data quality or pipeline optimisation.
✨Communicate Clearly and Confidently
Strong communication is key, especially when collaborating with non-technical teams. Practice explaining complex concepts in simple terms, and be prepared to discuss how you’ve successfully worked across different departments.