Data Engineer

Full-Time · £50,000–£70,000 / year (est.)
Rimes Technologies

At a Glance

  • Tasks: Build scalable data pipelines and onboard datasets for analytics and operational workflows.
  • Company: Join Rimes, a leader in enterprise data management for the global investment community.
  • Benefits: Enjoy 28 days of leave, health benefits, and a flexible hybrid work environment.
  • Other info: Diversity and inclusion are at the heart of our culture, promoting equal opportunities.
  • Why this job: Make an impact by solving complex data problems with cutting-edge technology.
  • Qualifications: 1-3 years in data engineering, proficiency in Python, PySpark, and SQL.

The predicted salary is between £50,000 and £70,000 per year.

Rimes provides enterprise data management solutions to the global investment community. Driven by our passion for solving the most complex data problems, we provide our clients with investment intelligence that powers more than US$75 trillion in assets under management annually. The world’s leading institutional investors, asset managers and service providers rely on Rimes to help them make better investment decisions using accurate information and industry‑leading technology.

The Opportunity

We’re looking for a Data Engineer to own data onboarding and build scalable, reliable data pipelines that power analytics, operational workflows, and data‑driven decisions across Rimes. You’ll work closely with data producers, analysts, and product teams to ingest, transform, and operationalize data, primarily within Palantir Foundry (our core data platform) and complementary cloud compute. Note: Experience with Palantir Foundry is a strong plus but not required. If you bring solid data engineering fundamentals in Python/PySpark, SQL, and modern ELT patterns, we’ll support a fast ramp‑up on Foundry.

Responsibilities

  • Ingest & onboard datasets from internal systems, APIs, databases, files, external providers, and real‑time feeds.
  • Build and operate scalable ETL/ELT pipelines using Python, PySpark, SQL, and Foundry pipeline tooling; schedule and automate batch/stream refreshes.
  • Model and operationalize data (e.g., defining entities/relationships) to support analytics and operational applications in collaboration with domain experts.
  • Ensure trust in data through testing, data quality checks, observability/alerting, lineage, and compliant access controls.
  • Collaborate with analysts and product teams to translate business requirements into robust data solutions and clear data contracts/SLOs.
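The ingest–transform–operationalize pattern described above can be sketched in miniature. This is an illustrative example in plain Python, not Rimes's actual pipeline code: the role itself uses PySpark and Foundry pipeline tooling, and all record fields, thresholds, and function names here are hypothetical stand-ins for the extract, validate, transform, and load stages.

```python
# Minimal ETL sketch: ingest raw records, enforce a data quality check,
# transform to a stable schema, and separate rejects for alerting.
# All field names are hypothetical; a production version would use
# PySpark DataFrames and platform-native quality checks.

RAW_RECORDS = [
    {"id": 1, "price": "101.5", "currency": "GBP"},
    {"id": 2, "price": "not-a-number", "currency": "GBP"},  # fails validation
    {"id": 3, "price": "99.0", "currency": "USD"},
]

def extract(records):
    # Ingest step: in practice this would read an API, database, or file.
    return list(records)

def validate(record):
    # Data quality check: price must parse as a positive float.
    try:
        return float(record["price"]) > 0
    except ValueError:
        return False

def transform(record):
    # Normalise types so downstream consumers see a consistent schema.
    return {
        "id": record["id"],
        "price": float(record["price"]),
        "currency": record["currency"],
    }

def run_pipeline(records):
    raw = extract(records)
    clean = [transform(r) for r in raw if validate(r)]
    rejected = [r for r in raw if not validate(r)]
    return clean, rejected  # rejects would feed observability/alerting

clean, rejected = run_pipeline(RAW_RECORDS)
```

The same shape scales up: the validate step becomes declarative quality expectations, and the rejected rows become the monitoring and alerting signal the responsibilities above call for.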

What Success Looks Like (First 3–6 Months)

  • You onboard and productionize new data sources with reliable refresh (scheduled or real‑time).
  • You deliver trusted, well‑documented datasets consumed by analytics and operational teams.
  • Key business entities are clearly modeled and discoverable.
  • Pipelines have meaningful monitoring and alerting, with reduced failures/re‑runs.
  • You contribute to standards/templates that speed up future onboarding.

Requirements

  • 1-3 years in data engineering or analytics engineering with end‑to‑end pipeline delivery in production.
  • Proficiency in Python & PySpark for distributed data processing.
  • Strong SQL for analytical and transformation logic.
  • Data modeling skills for both analytics and operational use cases.
  • Experience with data ingestion from APIs, databases, external feeds, and real‑time sources.
  • Solid grasp of data quality, testing, observability, lineage, and governance practices.
  • Comfort working with large datasets and distributed compute using modern ELT patterns.
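As an illustration of the kind of SQL transformation logic these requirements point at, here is a hedged sketch using Python's built-in SQLite (table and column names are invented): a window function that keeps only the latest row per entity, a common deduplication step when onboarding append-only feeds.

```python
import sqlite3

# Hypothetical "latest record per entity" dedup, a common transformation
# when onboarding append-only data feeds. SQLite >= 3.25 supports the
# window functions used below.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE prices (entity_id INT, as_of TEXT, price REAL);
INSERT INTO prices VALUES
  (1, '2024-01-01', 100.0),
  (1, '2024-01-02', 101.0),
  (2, '2024-01-01', 50.0);
""")

# Rank rows per entity by recency, then keep only the newest (rn = 1).
rows = conn.execute("""
SELECT entity_id, as_of, price FROM (
  SELECT entity_id, as_of, price,
         ROW_NUMBER() OVER (
           PARTITION BY entity_id ORDER BY as_of DESC
         ) AS rn
  FROM prices
)
WHERE rn = 1
ORDER BY entity_id
""").fetchall()
```

The same `ROW_NUMBER() OVER (PARTITION BY … ORDER BY …)` pattern carries over directly to Spark SQL and PySpark's `Window` API.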

Nice To Have

  • Palantir Foundry: pipelines/transforms, Code Repos, Ontology, and operational applications.
  • Spark execution concepts: partitions, shuffles, caching, and performance optimization.
  • Exposure to Databricks or cloud‑native compute with compute pushdown.
  • Experience with financial or enterprise operational data.
  • Experience with AI‑assisted ETL/ELT or data quality tooling.
  • Familiarity with streaming frameworks and/or orchestration tools.

What We Offer

  • 28 days of annual leave
  • Health Shield cashback plan
  • MetLife Afterlife Support
  • MetLife 24-hour virtual GP service
  • Chubb travel insurance

Compensation: Competitive pay and bonus eligibility

Work Life Balance: Flexible hybrid work environment

Rimes is committed to promoting the values of diversity and inclusion throughout the business. Whether it’s through recruitment, retention, career progression or training and development, we are committed to improving opportunities for people regardless of their background or circumstances.

Data Engineer employer: Rimes Technologies

Rimes is an exceptional employer that champions a flexible hybrid work environment, allowing Data Engineers to thrive while balancing their professional and personal lives. With a strong commitment to diversity and inclusion, Rimes offers robust career development opportunities and a supportive culture that encourages innovation and collaboration, making it an ideal place for those looking to make a meaningful impact in the investment community.

Contact Detail:

Rimes Technologies Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Data Engineer

✨Tip Number 1

Network like a pro! Reach out to folks in the industry, especially those at Rimes or similar companies. A friendly chat can open doors and give you insights that job descriptions just can't.

✨Tip Number 2

Show off your skills! Create a portfolio showcasing your data engineering projects, especially if you've worked with Python, PySpark, or SQL. This is your chance to shine and demonstrate what you can bring to the table.

✨Tip Number 3

Prepare for the interview by brushing up on your technical knowledge and problem-solving skills. Be ready to discuss how you've tackled data challenges in the past and how you can contribute to Rimes' mission.

✨Tip Number 4

Don't forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, it shows you're genuinely interested in joining the Rimes team.

We think you need these skills to ace Data Engineer

Data Engineering
Python
PySpark
SQL
ETL/ELT Pipelines
Data Ingestion
Data Modeling
Data Quality
Testing
Observability
Lineage
Governance Practices
Collaboration
Cloud Compute
Palantir Foundry

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Python, PySpark, and SQL, and don’t forget to mention any relevant projects or achievements that showcase your data engineering skills.

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your skills align with our mission at Rimes. Keep it concise but impactful!

Showcase Your Problem-Solving Skills: In your application, give examples of how you've tackled complex data problems in the past. We love seeing how you approach challenges and what solutions you’ve implemented!

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy!

How to prepare for a job interview at Rimes Technologies

✨Know Your Data Engineering Fundamentals

Make sure you brush up on your data engineering basics, especially in Python, PySpark, and SQL. Be ready to discuss how you've used these skills in past projects, as this will show your practical experience and understanding of the role.

✨Familiarise Yourself with ETL/ELT Processes

Since the job involves building scalable ETL/ELT pipelines, it’s crucial to understand these processes inside out. Prepare to explain how you’ve designed or optimised data pipelines before, and think about any challenges you faced and how you overcame them.

✨Showcase Your Collaboration Skills

Collaboration is key in this role, so be ready to share examples of how you've worked with analysts and product teams. Highlight any successful projects where you translated business requirements into data solutions, as this will demonstrate your ability to work cross-functionally.

✨Prepare for Technical Questions

Expect technical questions that test your knowledge of data quality, testing, and observability practices. Brush up on concepts like data lineage and governance, and be prepared to discuss how you ensure trust in the data you work with.
