Senior Data Engineer | London, UK | Remote

London | Full-Time | £54,000 – £84,000 per year (est.)

At a Glance

  • Tasks: Design and optimise data pipelines for trading and research operations.
  • Company: Join a rapidly growing hedge fund with impressive returns and a dynamic team.
  • Benefits: Enjoy remote work flexibility and a culture of ownership and innovation.
  • Why this job: Be at the forefront of data engineering, driving impactful solutions in finance.
  • Qualifications: Strong SQL skills, experience with ETL tools, and proficiency in Python or Java/Scala required.
  • Other info: Collaborate with top professionals and contribute to cutting-edge data infrastructure.

The predicted salary is between £54,000 and £84,000 per year.

We are looking for a Senior Data Engineer to help us architect, implement, and operate the complete data pipeline infrastructure for our Research and Trading operations. This role will be crucial in building a scalable, reliable, and cost-efficient system for handling vast amounts of market trading data, real-time news feeds, and a variety of internal and external data sources. The ideal candidate is a hands-on professional who understands the entire data lifecycle and can drive innovation while collaborating with research and engineering teams to meet their needs.

Responsibilities

  • Design, build, and optimize scalable pipelines for ingesting, transforming, and integrating large-volume datasets (market data, news feeds, and various unstructured data sources); a minimal, illustrative pipeline sketch follows this list.
  • Ensure data quality, consistency, and real-time monitoring using tools such as dbt and third-party libraries that facilitate data validation.
  • Develop processes to normalize and organize our data warehouse for use across different departments.
  • Apply advanced data management practices to ensure the scalability, availability, and efficiency of data storage.
  • Ensure the infrastructure supports trading and research needs while maintaining data integrity, security, and performance at scale.
  • Collaborate with research and analytics teams to understand their data needs and build frameworks that empower data exploration, analysis, and model development.
  • Create tools for overlaying data from multiple sources.
  • Ensure that data storage, processing, and management are done in a cost-effective manner, optimizing both hardware and software resources.
  • Implement solutions that balance high performance with cost control.
  • Stay ahead of the curve by continuously evaluating and adopting the most suitable technologies for the organization’s data engineering needs.
  • Ensure that the company’s systems align with the latest best practices in data management.
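
To give a concrete flavour of the pipeline work described above, here is a minimal, illustrative sketch of a daily ingestion job using Airflow's TaskFlow API (Airflow 2.4+ syntax). The data source, schema, and task bodies are invented placeholders rather than a description of our actual stack; in practice, in-warehouse transformations of this kind would often be delegated to dbt models.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def market_data_pipeline():
    """Illustrative daily pipeline: ingest raw market data, normalise it, load it."""

    @task
    def extract() -> list[dict]:
        # Placeholder: pull the previous day's ticks/news from a (hypothetical) vendor API.
        return [{"symbol": "ABC", "price": "101.5", "ts": "2024-01-01T00:00:00Z"}]

    @task
    def transform(raw: list[dict]) -> list[dict]:
        # Placeholder: enforce types, deduplicate, and normalise the schema.
        return [{**row, "price": float(row["price"])} for row in raw]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: write to the warehouse via the appropriate provider hook.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


market_data_pipeline()
```

The point is the shape of the work (orchestrated extract, transform, and load steps with explicit dependencies), not the specific tool or code.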

Requirements

Must Have

  • Strong problem-solving and analytical thinking.
  • Clear communication skills for cross-functional collaboration.
  • Proficiency in building robust data quality checks for ingested data (a small validation sketch follows this list).
  • Experience identifying anomalies in ingested data.
  • Strong proficiency in writing complex SQL (and similar) queries and optimizing their performance.
  • Proficiency in Python or Java/Scala.
  • Experience building and maintaining complex ETL pipelines with tools like Apache Airflow, dbt, or custom scripts.
  • Strong understanding of dimensional modeling, star/snowflake schemas, and normalization/denormalization principles.
  • Proven experience with platforms like Snowflake, Redshift, BigQuery, Synapse.
  • Expert knowledge of Apache Spark, Kafka, Flink, or similar.
  • Strong understanding of data security and privacy standards.
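
As a rough illustration of the data quality and anomaly checks referred to above, the sketch below runs a simple validation pass over an ingested batch with pandas. The column names, thresholds, and rules are invented for the example; in a production pipeline, checks like these would more likely live in dbt tests or a dedicated data quality framework.

```python
import pandas as pd


def validate_market_data(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data quality issues found in an ingested batch."""
    issues: list[str] = []

    # Completeness: required columns must be present and non-null.
    for col in ("symbol", "price", "ts"):
        if col not in df.columns:
            issues.append(f"missing column: {col}")
        elif df[col].isna().any():
            issues.append(f"null values in column: {col}")

    # Validity: prices must be strictly positive.
    if "price" in df.columns and (df["price"] <= 0).any():
        issues.append("non-positive prices found")

    # Simple anomaly check: flag prices far outside the batch distribution.
    if "price" in df.columns and len(df) > 1 and df["price"].std() > 0:
        z = (df["price"] - df["price"].mean()).abs() / df["price"].std()
        if (z > 5).any():
            issues.append("outlier prices detected (|z| > 5)")

    return issues


batch = pd.DataFrame({"symbol": ["ABC", "ABC"], "price": [101.5, -1.0], "ts": ["t0", "t1"]})
print(validate_market_data(batch))  # ['non-positive prices found']
```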

Good to Have

  • A degree in Computer Science, Engineering, Mathematics, or a related field.
  • Familiarity with one of the major cloud platforms (AWS, GCP, Azure) and their data services (e.g., BigQuery, Redshift, S3, Dataflow), proven by certifications (e.g., Google Professional Data Engineer, AWS Big Data Specialty, or Snowflake’s SnowPro Data Engineer).
  • Experience with data quality frameworks (e.g., Great Expectations, Deequ, or others); a brief Great Expectations sketch follows this list.
  • Experience with Git/GitHub or similar for code versioning.
  • Experience with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
  • Exposure to containerization/orchestration (Docker, Kubernetes).
  • Familiarity with data governance, data lineage, and catalog tools (e.g., Apache Atlas, Amundsen).
  • Hands-on with observability and monitoring tools for data pipelines (e.g., Monte Carlo, Datadog).
  • Knowledge of machine learning pipelines.
  • Prior experience in a trading or financial services environment.
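
For comparison with the hand-rolled checks sketched earlier, a data quality framework expresses the same idea declaratively. The fragment below uses the older pandas-flavoured Great Expectations API purely as a sketch of the concept (recent GX releases restructure this around data contexts and validators, and the column and values here are invented):

```python
import pandas as pd
import great_expectations as ge

# Wrap an ingested batch so that expectation methods become available on it.
batch = ge.from_pandas(pd.DataFrame({"price": [101.5, 99.8, 100.2]}))

# Each expectation returns a validation result whose `success` flag can gate
# the rest of the pipeline or feed monitoring and alerting.
print(batch.expect_column_values_to_not_be_null("price").success)              # True
print(batch.expect_column_values_to_be_between("price", min_value=0).success)  # True
```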

Interview Process

Our partner and VP of Engineering will review your CV. The VP of Engineering will conduct the first round of interviews, and our partner will conduct a further round covering technical and cultural fit. Additional rounds may be held with other team members or our partners as necessary.

Throughout the process, you will be assessed for cultural fit through our company values: Drive, Ownership, Judgement, Openness, Competence.

We are a rapidly growing hedge fund, two years old, managing nine-figure AUM and generating 200%+ annualized returns with a Sharpe ratio of 4. Our team has grown to approximately 40 professionals across Trading & Research, Technology, and Operations. As part of our growing team, you will play a pivotal role in designing and implementing robust data infrastructure that enables seamless research and analytical workflows and effective trade ideation and execution. If you are an experienced data engineering leader with a passion for complex data systems, we want to hear from you!

About the employer: Hermeneutic Investments

At Hermeneutic Investments, we pride ourselves on being an exceptional employer, offering a dynamic work culture that fosters innovation and collaboration among our talented team in London. With a strong focus on employee growth, we provide ample opportunities for professional development and ownership in projects, ensuring that every team member can contribute meaningfully to our success. Our commitment to maintaining a supportive environment, combined with the chance to work on cutting-edge data infrastructure in the fast-paced financial sector, makes us an attractive choice for those seeking a rewarding career.

Contact Details:

Hermeneutic Investments Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Senior Data Engineer | London, UK | Remote role

✨Tip Number 1

Familiarise yourself with the specific technologies mentioned in the job description, such as Apache Airflow, dbt, and SQL. Being able to discuss your hands-on experience with these tools during the interview will demonstrate your capability and readiness for the role.

✨Tip Number 2

Prepare examples of how you've successfully collaborated with cross-functional teams in the past. This role requires clear communication and teamwork, so showcasing your ability to work well with others will be crucial in the interview process.

✨Tip Number 3

Stay updated on the latest trends and best practices in data management and engineering. Being knowledgeable about current technologies and methodologies will not only help you in the interview but also show your commitment to continuous learning.

✨Tip Number 4

Demonstrate your problem-solving skills by preparing to discuss specific challenges you've faced in previous roles and how you overcame them. This will highlight your analytical thinking and ability to drive innovation, which are key qualities for this position.

We think you need these skills to ace the Senior Data Engineer | London, UK | Remote application

Strong problem-solving skills
Analytical thinking
Clear communication skills
Proficiency in SQL and performance optimisation
Proficiency in Python or Java/Scala
Experience with ETL pipelines (Apache Airflow, dbt)
Understanding of dimensional modelling and schema design
Experience with data platforms (Snowflake, Redshift, BigQuery, Synapse)
Expert knowledge of Apache Spark, Kafka, Flink
Understanding of data security and privacy standards
Familiarity with cloud platforms (AWS, GCP, Azure)
Experience with data quality frameworks
Experience with Git/GitHub for version control
Experience with infrastructure-as-code tools (Terraform, CloudFormation)
Exposure to containerization/orchestration (Docker, Kubernetes)
Familiarity with data governance and lineage tools
Hands-on experience with observability and monitoring tools
Knowledge of machine learning pipelines
Prior experience in a trading or financial services environment

Some tips for your application 🫡

Tailor Your CV: Make sure your CV highlights relevant experience and skills that align with the job description. Focus on your proficiency in SQL, Python or Java/Scala, and any experience with ETL pipelines and data management practices.

Craft a Compelling Cover Letter: Write a cover letter that showcases your passion for data engineering and your understanding of the company's needs. Mention specific projects or experiences that demonstrate your ability to design and implement scalable data infrastructures.

Showcase Problem-Solving Skills: In your application, provide examples of how you've tackled complex data challenges in the past. Highlight your analytical thinking and problem-solving abilities, especially in relation to data quality and performance optimization.

Highlight Collaboration Experience: Emphasise your experience working with cross-functional teams. Mention any collaborative projects where you’ve worked closely with research or analytics teams to meet their data needs, as this is crucial for the role.

How to prepare for a job interview at Hermeneutic Investments

✨Showcase Your Technical Skills

Be prepared to discuss your experience with SQL, Python, and ETL tools like Apache Airflow. Bring examples of complex queries you've written or pipelines you've built, as this will demonstrate your hands-on expertise.

✨Understand the Company Culture

Familiarise yourself with the company's values such as Drive, Ownership, and Openness. Be ready to provide examples from your past experiences that align with these values, showcasing how you embody them in your work.

✨Prepare for Problem-Solving Questions

Expect to face technical challenges during the interview. Practice articulating your thought process when solving data-related problems, as this will highlight your analytical thinking and problem-solving skills.

✨Ask Insightful Questions

Prepare thoughtful questions about the company's data infrastructure and future projects. This shows your genuine interest in the role and helps you assess if the company aligns with your career goals.

Application deadline: 2027-04-23
