Data Engineer in Slough

Slough | Full-Time | £36,000–£60,000 per year (est.) | No home office possible

At a Glance

  • Tasks: Design and build scalable data pipelines for advanced analytics and AI applications.
  • Company: Join a growing team of AI specialists in a dynamic investment firm.
  • Benefits: Competitive salary, flexible work environment, and opportunities for professional growth.
  • Why this job: Make a real impact by transforming data into valuable insights for investment decisions.
  • Qualifications: Experience in Python, Azure, and data engineering tools like dbt and Airflow.
  • Other info: Collaborative culture with opportunities to shape analytics and decision-making.

The predicted salary is between £36,000 and £60,000 per year.

We are looking to expand our Data Engineering team to build modern, scalable data platforms for our internal investment desks and portfolio companies. You will contribute to the firm’s objectives by delivering rapid and reliable data solutions that unlock value for Cerberus desks, portfolio companies, and other businesses. You’ll do this by designing and implementing robust data architectures, pipelines, and workflows that enable advanced analytics and AI applications. You may also support initiatives such as due diligence and pricing analyses by ensuring high-quality, timely data availability.

What you will do:

  • Design, build, and maintain scalable, cloud-based data pipelines and architectures to support advanced analytics and machine learning initiatives.
  • Develop robust ELT workflows using tools like dbt, Airflow, and SQL (PostgreSQL, MySQL) to transform raw data into high-quality, analytics-ready datasets.
  • Collaborate with data scientists, analysts, and software engineers to ensure seamless data integration and availability for predictive modeling and business intelligence.
  • Optimize data storage and processing in Azure environments for performance, reliability, and cost-efficiency.
  • Implement best practices for data modeling, governance, and security across all platforms.
  • Troubleshoot and enhance existing pipelines to improve scalability and resilience.
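As a rough illustration of the kind of transformation step these ELT workflows produce, here is a minimal sketch in plain Python. The record fields (`symbol`, `price`, `ts`) are hypothetical, chosen only to show the raw-to-analytics-ready shape, not an actual schema used by the team:

```python
from datetime import datetime, timezone

def transform_prices(raw_records):
    """Normalise raw API price records into analytics-ready rows.

    Hypothetical schema for illustration: each raw record carries
    'symbol', 'price' (string), and 'ts' (ISO-8601 timestamp).
    """
    rows = []
    for rec in raw_records:
        # Skip malformed records rather than failing the whole batch.
        if not rec.get("symbol") or rec.get("price") in (None, ""):
            continue
        rows.append({
            "symbol": rec["symbol"].upper(),
            "price": round(float(rec["price"]), 4),
            # Normalise all timestamps to UTC for downstream consistency.
            "observed_at": datetime.fromisoformat(rec["ts"]).astimezone(timezone.utc),
        })
    return rows

raw = [
    {"symbol": "aapl", "price": "189.9501", "ts": "2024-05-01T14:30:00+01:00"},
    {"symbol": "", "price": "10"},  # malformed: dropped
]
clean = transform_prices(raw)
print(len(clean), clean[0]["symbol"])  # 1 AAPL
```

In a production pipeline this logic would typically live in a dbt model or an Airflow task rather than a standalone script.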

Sample Projects You'll Work On:

  • Financial Asset Management Pipeline: Build and manage data ingestion from third-party APIs, model data using dbt, and support machine learning workflows for asset pricing and prediction using Azure ML Studio. This includes ELT processes, data modeling, running predictions, and storing outputs for downstream analytics.
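The pipeline described above breaks down into ingest, model, predict, and store stages. The sketch below stubs each stage with hypothetical data and function names purely to show the flow; it is not the firm's actual code:

```python
# Illustrative stage breakdown; all data and names are stubs.

def extract():
    # In practice: ingest records from a third-party pricing API.
    return [{"asset": "bond-123", "price": 101.5}]

def model(rows):
    # In practice: a dbt transformation; here, a minimal cleaning step.
    return [{**r, "price_clean": round(r["price"], 2)} for r in rows]

def predict(rows):
    # In practice: score with a model deployed via Azure ML Studio.
    return [{**r, "predicted": r["price_clean"] * 1.01} for r in rows]

def store(rows):
    # In practice: write outputs to the warehouse for downstream analytics.
    return len(rows)

stored = store(predict(model(extract())))
print(stored)  # 1
```

An orchestrator such as Airflow would run these stages as separate, scheduled, monitored tasks rather than one chained call.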

Your Experience:

We’re a small, high-impact team with a broad remit and diverse technical backgrounds. We don’t expect any single candidate to check every box below. If your experience overlaps strongly with what we do and you’re excited to apply your skills in a fast-moving, real-world environment, we’d love to hear from you.

  • Strong technical foundation: Degree in a STEM field (or equivalent experience) with hands-on experience in production environments, emphasizing performance optimization and code quality.
  • Python expertise: Advanced proficiency in Python for data engineering, data wrangling and pipeline development.
  • Cloud Platforms: Hands-on experience working with Azure. AWS experience is also considered; however, Azure exposure is essential.
  • Data Warehousing: Proven expertise with Snowflake – schema design, performance tuning, data ingestion, and security.
  • Workflow Orchestration: Production experience with Apache Airflow (Prefect, Dagster or similar), including authoring DAGs, scheduling workloads and monitoring pipeline execution.
  • Data Modeling: Strong skills in dbt, including writing modular SQL transformations, building data models, and maintaining dbt projects.
  • SQL Databases: Extensive experience with PostgreSQL, MySQL (or similar), including schema design, optimization, and complex query development.
  • Infrastructure as Code: Production experience with declarative infrastructure definition – e.g. Terraform, Pulumi or similar.
  • Version Control and CI/CD: Familiarity with Git-based workflows and continuous integration/deployment practices (e.g. Azure DevOps or GitHub Actions) to ensure seamless code integration and deployment.
  • Communication and problem-solving skills: Ability to articulate complex technical concepts to technical and non-technical stakeholders alike, with excellent problem-solving skills and a strong analytical mindset.
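On the dbt point above: a dbt model is essentially a version-controlled SELECT statement. As an illustration only, here is an equivalent "staging model" run against an in-memory SQLite database (table and column names are hypothetical; a real project would target Snowflake through dbt):

```python
import sqlite3

# Stand-in source table with a few fabricated trades.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_trades (symbol TEXT, qty INTEGER, px REAL)")
conn.executemany(
    "INSERT INTO raw_trades VALUES (?, ?, ?)",
    [("aapl", 10, 190.0), ("aapl", -4, 191.0), ("msft", 7, 410.0)],
)

# stg_positions: net quantity per symbol, the kind of modular
# transformation a dbt project would manage as one model file.
stg_positions = """
    SELECT UPPER(symbol) AS symbol, SUM(qty) AS net_qty
    FROM raw_trades
    GROUP BY UPPER(symbol)
    ORDER BY symbol
"""
rows = conn.execute(stg_positions).fetchall()
print(rows)  # [('AAPL', 6), ('MSFT', 7)]
```

dbt's value on top of raw SQL like this is dependency management between models, testing, and documentation.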

About Us: We are a new but growing team of AI specialists - data scientists, software engineers, and technology strategists - working to transform how an alternative investment firm with $65B in assets under management leverages technology and data. Our remit is broad, spanning investment operations, portfolio companies, and internal systems, giving the team the opportunity to shape the way the firm approaches analytics, automation, and decision-making. We operate with the creativity and agility of a small team, tackling diverse, high-impact challenges across the firm. While we are embedded within a global investment platform, we maintain a collaborative, innovative culture where our AI talent can experiment, learn, and have real influence on business outcomes.

Data Engineer in Slough employer: Cerberus Capital Management

At Cerberus, we pride ourselves on being an excellent employer, offering a dynamic work environment where innovation and collaboration thrive. Our Data Engineering team is at the forefront of transforming data into actionable insights, providing employees with unique opportunities for professional growth and development in a fast-paced, high-impact setting. With a focus on creativity and agility, we empower our team members to experiment and influence key business outcomes while enjoying the benefits of working within a supportive and inclusive culture.

Contact Detail:

Cerberus Capital Management Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer role in Slough

✨Network Like a Pro

Get out there and connect with folks in the industry! Attend meetups, webinars, or even just grab a coffee with someone who’s already in the data engineering game. We can’t stress enough how personal connections can lead to job opportunities.

✨Show Off Your Skills

Don’t just tell us what you can do; show us! Create a portfolio of your projects, especially those involving data pipelines and cloud platforms. We love seeing real-world applications of your skills, so make sure to highlight your best work!

✨Ace the Interview

Prepare for technical interviews by brushing up on your Python, SQL, and cloud knowledge. We want to see how you think through problems, so practice explaining your thought process clearly. Mock interviews can be a great way to build confidence!

✨Apply Through Our Website

We encourage you to apply directly through our website. It’s the best way for us to see your application and get you in front of the right people. Plus, it shows you’re genuinely interested in joining our team!

We think you need these skills to ace the Data Engineer role in Slough

Data Engineering
Cloud-based Data Pipelines
ELT Workflows
dbt
Apache Airflow
SQL (PostgreSQL, MySQL)
Azure
Data Warehousing (Snowflake)
Data Modeling
Infrastructure as Code (Terraform, Pulumi)
Version Control (Git)
CI/CD Practices
Communication Skills
Problem-Solving Skills
Analytical Mindset

Some tips for your application 🫡

Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Data Engineer role. Highlight your technical expertise in Python, Azure, and data warehousing to catch our eye!

Craft a Compelling Cover Letter: Use your cover letter to tell us why you're excited about this position and how your background fits into our team. Share specific examples of your past projects that relate to building scalable data pipelines.

Show Off Your Projects: If you've worked on any relevant projects, especially those involving ELT workflows or cloud platforms, make sure to mention them! We love seeing practical applications of your skills.

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!

How to prepare for a job interview at Cerberus Capital Management

✨Know Your Tech Stack

Make sure you’re well-versed in the tools mentioned in the job description, like dbt, Airflow, and Azure. Brush up on your SQL skills too, especially with PostgreSQL and MySQL, as you'll likely be asked to demonstrate your knowledge during the interview.

✨Showcase Your Projects

Prepare to discuss specific projects you've worked on that relate to data engineering. Highlight your experience with building data pipelines, optimising performance, and any challenges you overcame. This will show your practical skills and problem-solving abilities.

✨Communicate Clearly

Practice explaining complex technical concepts in simple terms. You’ll need to articulate your ideas to both technical and non-technical stakeholders, so being able to communicate effectively is key. Consider doing mock interviews with friends or colleagues to refine this skill.

✨Ask Insightful Questions

Prepare thoughtful questions about the team’s current projects, challenges they face, and how they measure success. This shows your genuine interest in the role and helps you assess if the company culture aligns with your values.
