Data Engineer in City of London

City of London · Full-Time · £36,000–£60,000 / year (est.) · No home office possible

At a Glance

  • Tasks: Design and build scalable data pipelines for advanced analytics and AI applications.
  • Company: Join a growing team at a leading alternative investment firm with $65B in assets.
  • Benefits: Competitive salary, flexible work environment, and opportunities for professional growth.
  • Why this job: Make a real impact by transforming data into valuable insights for investment decisions.
  • Qualifications: Experience in Python, Azure, and data engineering principles.
  • Other info: Collaborative culture that encourages creativity and innovation in a fast-paced environment.

The predicted salary is between £36,000 and £60,000 per year.

We are looking to expand our Data Engineering team to build modern, scalable data platforms for our internal investment desks and portfolio companies. You will contribute to the firm’s objectives by delivering rapid and reliable data solutions that unlock value for Cerberus desks, portfolio companies, and other businesses. You’ll do this by designing and implementing robust data architectures, pipelines, and workflows that enable advanced analytics and AI applications. You may also support initiatives such as due diligence and pricing analyses by ensuring high-quality, timely data availability.

What you will do:

  • Design, build, and maintain scalable, cloud-based data pipelines and architectures to support advanced analytics and machine learning initiatives.
  • Develop robust ELT workflows using tools like dbt, Airflow, and SQL (PostgreSQL, MySQL) to transform raw data into high-quality, analytics-ready datasets.
  • Collaborate with data scientists, analysts, and software engineers to ensure seamless data integration and availability for predictive modeling and business intelligence.
  • Optimize data storage and processing in Azure environments for performance, reliability, and cost-efficiency.
  • Implement best practices for data modeling, governance, and security across all platforms.
  • Troubleshoot and enhance existing pipelines to improve scalability and resilience.
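As an illustration of the transform stage in the ELT workflows described above, here is a minimal, hypothetical Python sketch (the record schema, field names, and `transform_trades` function are invented for illustration; in practice this shaping would typically live in dbt/SQL as the posting notes):

```python
from datetime import date

def transform_trades(raw_records):
    """Turn raw API records into analytics-ready rows.

    Hypothetical schema: each raw record carries 'sym', 'px', and 'ts'
    fields; we normalise names, cast types, and skip malformed rows.
    """
    clean = []
    for rec in raw_records:
        try:
            clean.append({
                "symbol": rec["sym"].upper(),
                "price": float(rec["px"]),
                "trade_date": date.fromisoformat(rec["ts"][:10]),
            })
        except (KeyError, ValueError):
            # Quarantine bad records rather than failing the whole run.
            continue
    return clean

raw = [
    {"sym": "aapl", "px": "189.30", "ts": "2024-05-01T14:32:00Z"},
    {"sym": "msft", "px": "bad",    "ts": "2024-05-01T14:33:00Z"},
]
rows = transform_trades(raw)
```

A real pipeline would also log or dead-letter the skipped rows instead of silently dropping them, so data-quality issues stay visible.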

Sample Projects You'll Work On:

  • Financial Asset Management Pipeline: Build and manage data ingestion from third-party APIs, model data using dbt, and support machine learning workflows for asset pricing and prediction using Azure ML Studio. This includes ELT processes, data modeling, running predictions, and storing outputs for downstream analytics.
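The stages of a pipeline like this can be sketched as a chain of small steps. The following is a toy, hypothetical sketch only (every name and the linear pricing rule are invented; the real project would use dbt for modelling and Azure ML Studio for predictions):

```python
def ingest(api_payload):
    """Stage 1: pull raw asset records from a third-party API payload."""
    return api_payload["assets"]

def model(records):
    """Stage 2: shape raw records into a keyed, analytics-ready table
    (in practice this is a dbt model, not Python)."""
    return {r["id"]: {"features": [r["coupon"], r["maturity_years"]]}
            for r in records}

def predict(table):
    """Stage 3: score each asset. A toy linear pricing rule stands in
    for a call to a deployed ML endpoint."""
    return {k: round(100 + 2.0 * v["features"][0] - 0.5 * v["features"][1], 2)
            for k, v in table.items()}

def store(prices, sink):
    """Stage 4: persist outputs for downstream analytics."""
    sink.update(prices)
    return sink

payload = {"assets": [{"id": "bond-1", "coupon": 4.5, "maturity_years": 10}]}
warehouse = {}
store(predict(model(ingest(payload))), warehouse)
```

Keeping each stage a pure function of its input makes the workflow easy to schedule, retry, and test independently, which is the property an orchestrator like Airflow relies on.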

Your Experience:

We’re a small, high-impact team with a broad remit and diverse technical backgrounds. We don’t expect any single candidate to check every box below - if your experience overlaps strongly with what we do and you’re excited to apply your skills in a fast-moving, real-world environment, we’d love to hear from you.

  • Strong technical foundation: Degree in a STEM field (or equivalent experience) with hands-on experience in production environments, emphasizing performance optimization and code quality.
  • Python expertise: Advanced proficiency in Python for data engineering, data wrangling and pipeline development.
  • Cloud Platforms: Hands-on experience working with Azure. AWS experience is valued, but Azure exposure is essential.
  • Data Warehousing: Proven expertise with Snowflake – schema design, performance tuning, data ingestion, and security.
  • Workflow Orchestration: Production experience with Apache Airflow (Prefect, Dagster or similar), including authoring DAGs, scheduling workloads and monitoring pipeline execution.
  • Data Modeling: Strong skills in dbt, including writing modular SQL transformations, building data models, and maintaining dbt projects.
  • SQL Databases: Extensive experience with PostgreSQL, MySQL (or similar), including schema design, optimization, and complex query development.
  • Infrastructure as Code: Production experience with declarative infrastructure definition – e.g. Terraform, Pulumi or similar.
  • Version Control and CI/CD: Familiarity with Git-based workflows and continuous integration/deployment practices (e.g. Azure DevOps or GitHub Actions) to ensure seamless code integration and deployment.
  • Communication and Problem-Solving Skills: Ability to articulate complex technical concepts to technical and non-technical stakeholders alike, combined with excellent problem-solving skills and a strong analytical mindset.

About Us: We are a new, but growing team of AI specialists - data scientists, software engineers, and technology strategists - working to transform how an alternative investment firm with $65B in assets under management leverages technology and data. Our remit is broad, spanning investment operations, portfolio companies, and internal systems, giving the team the opportunity to shape the way the firm approaches analytics, automation, and decision-making.

We operate with the creativity and agility of a small team, tackling diverse, high-impact challenges across the firm. While we are embedded within a global investment platform, we maintain a collaborative, innovative culture where our AI talent can experiment, learn, and have real influence on business outcomes.

Data Engineer in City of London employer: Cerberus Capital Management

At Cerberus, we pride ourselves on being an excellent employer that fosters a collaborative and innovative culture, particularly within our expanding Data Engineering team. Our employees enjoy the opportunity to work on high-impact projects in a dynamic environment, with access to cutting-edge technology and resources that support their professional growth. With a focus on creativity and agility, we empower our team members to experiment and influence business outcomes while enjoying the benefits of working for a leading alternative investment firm.

Contact Detail:

Cerberus Capital Management Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Data Engineer in City of London

✨Tip Number 1

Network like a pro! Reach out to current employees on LinkedIn or at industry events. A friendly chat can give you insider info and maybe even a referral, which is always a bonus!

✨Tip Number 2

Show off your skills in real-time! Consider working on a personal project or contributing to open-source. This not only sharpens your abilities but also gives you something tangible to discuss during interviews.

✨Tip Number 3

Prepare for those tricky technical interviews! Brush up on your Python, SQL, and data pipeline concepts. Practising with mock interviews can help you feel more confident when it’s your turn to shine.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are genuinely interested in joining our team!

We think you need these skills to ace Data Engineer in City of London

Data Engineering
Cloud-based Data Pipelines
ELT Workflows
dbt
Apache Airflow
Python
Azure
Data Warehousing
Snowflake
SQL (PostgreSQL, MySQL)
Data Modeling
Infrastructure as Code (Terraform, Pulumi)
Version Control (Git)
CI/CD Practices
Communication Skills

Some tips for your application 🫡

Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Data Engineer role. Highlight your technical expertise in Python, Azure, and data pipelines to catch our eye!

Craft a Compelling Cover Letter: Use your cover letter to tell us why you're excited about this opportunity. Share specific examples of your past projects that relate to the job description, especially around data architectures and workflows.

Showcase Your Problem-Solving Skills: In your application, don’t just list your skills—demonstrate how you've used them to solve real-world problems. We love candidates who can articulate complex concepts clearly, so give us some examples!

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team!

How to prepare for a job interview at Cerberus Capital Management

✨Know Your Tech Stack

Make sure you’re well-versed in the tools mentioned in the job description, like dbt, Airflow, and SQL. Brush up on your Python skills too, as they’ll likely ask you to demonstrate your coding abilities or solve a problem on the spot.

✨Showcase Your Projects

Prepare to discuss specific projects you've worked on that relate to data pipelines and architectures. Be ready to explain your role, the challenges you faced, and how you overcame them, especially in cloud environments like Azure.

✨Communicate Clearly

Practice explaining complex technical concepts in simple terms. You might be asked to communicate with non-technical stakeholders, so showing you can bridge that gap will impress the interviewers.

✨Ask Insightful Questions

Prepare thoughtful questions about the team’s current projects or challenges they face. This shows your genuine interest in the role and helps you gauge if the company culture aligns with your values.
