At a Glance
- Tasks: Lead a team of Data Engineers and drive innovative data solutions.
- Company: Join a fast-growing RegTech SaaS provider shaping the future of compliance.
- Benefits: Enjoy private medical insurance, life insurance, flexible hours, and career progression.
- Other info: Inclusive culture with opportunities for personal and professional growth.
- Why this job: Make an impact in financial compliance while working with cutting-edge technologies.
- Qualifications: Experience in leading teams and building data pipelines with Python and PySpark.
The predicted salary is between £70,000 and £90,000 per year.
Novatus Global is a Series B scale-up RegTech SaaS provider and boutique advisory firm, helping financial institutions manage their most complex regulatory requirements. Our flagship SaaS platform, En:ACT, is a market-leading solution for regulatory transaction reporting and reconciliation across global regimes. En:ACT automates reporting, reconciles data across systems, and maps errors directly to regulatory rules, helping firms remediate quickly, reduce risk, and meet regulatory obligations with confidence.
Alongside our SaaS offering, our unique model delivers consulting services across Risk & Compliance, ESG, Strategy, Data, and Operations. We are shaping the future of regulatory compliance through innovation in both advisory and technology.
As a Data Engineering Lead, you will manage a team of Data Engineers whilst remaining hands-on with delivery and architecture. You will provide technical leadership and mentorship to your team and write clean, maintainable, well-tested code.
You will be accountable for our configuration-driven data platform in Databricks, enabling non-engineers to define regulatory logic, and our Snowflake data warehouse, ensuring scalability, auditability, and fitness for client-facing regulatory use cases. You will set technical direction, drive standards, and ensure high-quality execution across the team.
You’ll join at a pivotal stage as we modernise our data infrastructure, migrating from Python scripts and MySQL to Databricks and Snowflake. Our systems power regulatory reporting for major financial institutions, requiring precision, traceability, and reliability.
- Own the architecture, roadmap, and delivery for our configuration-driven data framework in Databricks and our Snowflake warehouse.
- Manage and mentor a team of Data Engineers in a player/coach capacity.
- Design, build, and optimise data pipelines using Databricks, Kafka, Python, and PySpark.
- Evolve a configuration-driven platform enabling non-engineers to define regulatory logic.
- Implement robust data quality controls including testing, validation, monitoring, and alerting.
- Drive performance optimisation across Spark and Snowflake workloads.
- Partner with Product, Engineering, DevOps, and Regulatory teams to translate requirements into scalable technical designs.
- Improve engineering standards, processes, and tooling across the data function.
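To give a flavour of the "configuration-driven platform enabling non-engineers to define regulatory logic" described above, here is a minimal sketch in plain Python. All names, the rule format, and the checks are illustrative assumptions for this posting, not the actual En:ACT platform: the idea is simply that validation rules live as plain data that non-engineers could author (e.g. in YAML), while engineers maintain the small library of reusable checks.

```python
# Minimal sketch of a configuration-driven validation rule engine.
# The rule format, field names, and checks below are illustrative
# assumptions, not the actual En:ACT platform.

# Rules are plain data, so non-engineers could author them (e.g. in YAML/JSON).
RULES = [
    {"field": "notional", "check": "non_negative", "message": "notional must be >= 0"},
    {"field": "trade_id", "check": "required", "message": "trade_id is mandatory"},
]

# Engineers maintain a small library of reusable, named checks.
CHECKS = {
    "non_negative": lambda v: v is not None and v >= 0,
    "required": lambda v: v not in (None, ""),
}

def validate(record: dict, rules=RULES) -> list:
    """Return the rule violations for one record; an empty list means clean."""
    errors = []
    for rule in rules:
        value = record.get(rule["field"])
        if not CHECKS[rule["check"]](value):
            errors.append(rule["message"])
    return errors

print(validate({"trade_id": "T-1", "notional": 100.0}))  # []
print(validate({"trade_id": "", "notional": -5.0}))
```

In a production setting the same pattern would run per-row inside a PySpark job, with failures routed to monitoring and alerting rather than printed.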
Experience required:
- Leading an engineering team.
- Building data pipelines using Python and PySpark.
- Designing auditable, reproducible data pipelines in regulated or high-integrity environments.
- Writing and optimising complex SQL queries on large data sets.
- Strong data modeling and warehouse design fundamentals.
- Strong software engineering fundamentals (clean code, automated testing, CI/CD, observability).
- Experience with modern cloud data platforms and orchestration tools.
- Ability to translate complex regulatory requirements into technical specifications.
- Hands-on experience with AWS cloud infrastructure.
- Building new data platforms or modernising legacy systems.
- FinTech, RegTech, or financial services background.
We offer:
- Private Medical Insurance (AXA) – includes mental health, dental, vision, and private GP access.
- Life Insurance (4× salary) with Unum.
- Unum’s Help@Hand: includes medical second opinions, physiotherapy, lifestyle coaching, savings and discounts, and cancer support services for you, your partner, and your child(ren).
- Employee Assistance Programme.
- Enhanced parental leave (maternity & paternity).
- Accelerated career progression based on performance, not tenure.
- Holiday entitlement increases with tenure.
- Flexible hours with core collaboration time.
- Paid volunteering leave.
- Gym & fitness discounts.
- Quarterly socials, and office snacks & drinks.
All employment decisions are made based on business needs, role requirements, and individual qualifications, without regard to race, age, religion or belief, sex, sexual orientation, gender identity or expression, marital or civil partnership status, pregnancy or maternity, socioeconomic background, disability, or any other characteristic protected under the Equality Act 2010. We maintain a workplace culture that is inclusive, respectful, and supportive.
Data Science Team Lead & Data Engineering Team Lead, City of London (employer: Novatus)
Contact Detail:
Novatus Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Science Team Lead & Data Engineering Team Lead role in the City of London
✨Tip Number 1
Network like a pro! Reach out to your connections in the industry, attend meetups, and engage with professionals on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Prepare for interviews by researching the company and its products, especially En:ACT. Understand their challenges and think about how your skills can help solve them. This shows you're genuinely interested and ready to contribute.
✨Tip Number 3
Practice your technical skills! Brush up on your Python, SQL, and data pipeline knowledge. Be ready to showcase your expertise during technical interviews, as they’ll want to see you in action.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re keen on joining the innovative team at Novatus Global.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the role you're applying for. Highlight your experience with data engineering, Python, and any relevant projects that showcase your skills, and show how you could contribute to the team.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about the role and how your background aligns with Novatus Global's mission. Keep it concise but impactful.
Showcase Your Technical Skills: Don’t forget to highlight your technical expertise in your application. Mention your experience with Databricks, Snowflake, and any other tools you've used. Hands-on candidates who are ready to dive into the tech stack stand out!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way to ensure your application gets through quickly and efficiently. Plus, it shows you're keen on joining the journey at Novatus Global!
How to prepare for a job interview at Novatus
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, like Databricks, Snowflake, Python, and PySpark. Brush up on your knowledge of data pipelines and be ready to discuss how you've used these tools in past projects.
✨Showcase Your Leadership Skills
As a Data Engineering Lead, you'll need to demonstrate your ability to manage and mentor a team. Prepare examples of how you've successfully led teams in the past, focusing on your approach to technical leadership and fostering collaboration.
✨Understand Regulatory Requirements
Since the role involves working with regulatory compliance, it’s crucial to understand the basics of regulatory requirements in the financial sector. Be prepared to discuss how you would translate complex regulations into technical specifications for your team.
✨Prepare for Problem-Solving Questions
Expect to face technical challenges during the interview. Practice solving problems related to data architecture and pipeline optimisation. Think about how you would approach performance issues in Spark or Snowflake and be ready to share your thought process.