Member of Technical Staff - Data Platform

Full-Time · £36,000 - £60,000 / year (est.) · No remote work

At a Glance

  • Tasks: Build and operate core data systems for AI research and production environments.
  • Company: Join a cutting-edge team from top AI companies like DeepMind and OpenAI.
  • Benefits: Top-tier salary, comprehensive health benefits, and generous parental leave.
  • Why this job: Make a real impact by creating the backbone for open superintelligence.
  • Qualifications: Strong data engineering skills and experience with large-scale data systems.
  • Other info: Collaborative culture with daily meals and regular team celebrations.

The predicted salary is between £36,000 and £60,000 per year.

Our Mission

Reflection’s mission is to build open superintelligence and make it accessible to all. We’re developing open-weight models for individuals, agents, enterprises, and even nation states. Our team of AI researchers and company builders comes from DeepMind, OpenAI, Google Brain, Meta, Character.AI, Anthropic, and beyond.

Vision

Build and operate a company-wide foundations platform that accelerates every team by providing reliable, scalable developer infrastructure, SRE capabilities, and high-throughput data ingestion tooling, enabling Reflection to move faster as we scale.

What This Team Does

Build and operate the core data systems and pipelines that power our research, training, and production environments. This platform enables high-velocity experimentation, reliable model development, and scalable production workflows by unifying ingestion, processing, and orchestration across the data lifecycle.

  • Design ingestion and orchestration patterns for both batch and streaming workloads.
  • Build scalable compute and storage foundations (formats, engines, runtimes) that support large-scale data processing.
  • Ensure reproducible pipelines through versioning, backfills, and isolated execution environments.
  • Provide trusted data quality, lineage, and governance signals so teams can make confident production decisions.
  • Maintain predictable cost and performance through guardrails, budgets, and continuous system tuning.
  • Enable a unified data layer that supports research, training, and production across the model development lifecycle.

About the Role

Build the core data systems and pipelines that power our research, training, and production environments. Design and implement reliable, scalable ingestion and orchestration patterns for batch and streaming workloads. Develop the storage and compute foundations that enable reproducible experimentation and high-velocity iteration. Drive data quality and governance standards that teams can trust for production decisions. Provide the foundational data layer that unifies ingestion, processing, and workflow management across model development.

What You’ll Work With:

  • Compute & Orchestration: Spark, Flink, Beam, Airflow, Dagster, Kafka, Pub/Sub.
  • Storage & Analytics: Data lake and warehouse architectures, Parquet, Iceberg, Delta Lake, BigQuery, Snowflake.
  • Metadata & Data Quality: Lineage systems, metadata management, Great Expectations, reproducibility systems.
  • Cost & Performance Management: Partitioning strategies, clustering, cost optimization, SLA-driven pipelines.

About You

  • Strong data engineering background with experience shipping production-grade pipelines at scale.
  • Experience designing and owning end-to-end data systems that handle large-scale batch and streaming workloads.
  • Comfortable debugging complex pipeline failures, optimizing for cost and performance, and maintaining data quality.
  • Thrive in a high-agency, fast-paced environment, with a bias toward action and impact.
  • Excited about zero-to-one challenges: building new systems rather than maintaining legacy ones.
  • A collaborative, clear communicator, comfortable working across research and infrastructure boundaries.
  • Motivated by creating the trusted data backbone for the world’s most capable open-weight AI systems.

What We Offer

We believe that to build superintelligence that is truly open, you need to start at the foundation. Joining Reflection means building from the ground up as part of a small, talent-dense team. You will help define our future as a company and help define the frontier of open foundation models. We want you to do the most impactful work of your career with the confidence that you and the people you care about most are supported.

  • Top-tier compensation: Salary and equity structured to recognize and retain the best talent globally.
  • Health & wellness: Comprehensive medical, dental, vision, life, and disability insurance.
  • Life & family: Fully paid parental leave for all new parents, including adoptive and surrogate journeys. Financial support for family planning.
  • Benefits & balance: Paid time off when you need it, relocation support, and more perks that optimize your time.
  • Opportunities to connect with teammates: Lunch and dinner are provided daily, with regular off-sites and team celebrations.

Member of Technical Staff - Data Platform employer: Reflection AI

At Reflection, we are committed to building a collaborative and innovative work environment where every team member plays a crucial role in shaping the future of open superintelligence. Our culture prioritises employee well-being with top-tier compensation, comprehensive health benefits, and generous parental leave, ensuring that you can focus on impactful work while enjoying a balanced life. Join us in a dynamic setting that fosters growth and creativity, as we develop cutting-edge data systems that empower AI research and production.

Contact Detail:

Reflection AI Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land Member of Technical Staff - Data Platform

✨Tip Number 1

Network like a pro! Reach out to folks in the industry, especially those who work at Reflection or similar companies. Use LinkedIn to connect and don’t be shy about asking for informational chats – it’s all about making those connections!

✨Tip Number 2

Prepare for your interviews by diving deep into the tech stack mentioned in the job description. Brush up on Spark, Flink, and Kafka, and be ready to discuss how you’ve tackled data engineering challenges in the past. Show us your passion for building scalable systems!

✨Tip Number 3

Don’t just apply – engage with our content! Follow Reflection on social media, comment on posts, and share your thoughts on open superintelligence. This shows your enthusiasm and helps you stand out from the crowd.

✨Tip Number 4

When you get that interview, come armed with questions! Ask about the team dynamics, the challenges they face, and how you can contribute to building the data backbone for their AI systems. It shows you’re genuinely interested and ready to jump in!

We think you need these skills to ace Member of Technical Staff - Data Platform

Data Engineering
Pipeline Development
Batch and Streaming Workloads
Debugging Complex Pipelines
Cost Optimization
Data Quality Management
Collaboration
Communication Skills
Spark
Flink
Beam
Airflow
Kafka
Data Lake Architectures
Metadata Management

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter to reflect the specific skills and experiences that match the job description. Highlight your data engineering background and any relevant projects you've worked on that align with our mission.

Showcase Your Technical Skills: When detailing your experience, focus on the tools and technologies mentioned in the job description, like Spark, Kafka, or BigQuery. We want to see how you've used these in real-world scenarios, so don't hold back!

Be Clear and Concise: Keep your application straightforward and to the point. Use bullet points for easy reading and make sure to clearly outline your achievements and contributions in previous roles. We appreciate clarity and directness!

Apply Through Our Website: We encourage you to submit your application through our website. This way, we can ensure it gets to the right people quickly. Plus, it shows you're keen on joining our team at Reflection!

How to prepare for a job interview at Reflection AI

✨Know Your Data Tools

Familiarise yourself with the specific tools mentioned in the job description, like Spark, Flink, and Kafka. Be ready to discuss your experience with these technologies and how you've used them to build scalable data systems.

✨Showcase Your Problem-Solving Skills

Prepare examples of complex pipeline failures you've debugged or optimised. Highlight your thought process and the steps you took to resolve issues, as this will demonstrate your ability to thrive in a fast-paced environment.

✨Understand the Importance of Data Quality

Be prepared to talk about how you ensure data quality and governance in your projects. Discuss any tools or strategies you've implemented to maintain trusted data, as this is crucial for making confident production decisions.

✨Emphasise Collaboration

Since the role involves working across research and infrastructure boundaries, share experiences where you've successfully collaborated with different teams. This will show that you're a clear communicator and can work well in a team-oriented environment.
