At a Glance
- Tasks: Build and scale data services for cutting-edge AI workflows.
- Company: Join CoreWeave, the essential cloud for AI innovation.
- Benefits: Enjoy competitive salary, health insurance, and tuition reimbursement.
- Other info: Hybrid work environment with excellent career growth opportunities.
- Why this job: Make a real impact in the AI engineering industry with your skills.
- Qualifications: 8+ years in data engineering with strong architecture experience.
The predicted salary is between £80,000 and £100,000 per year.
CoreWeave is The Essential Cloud for AI. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines infrastructure performance with deep technical expertise to accelerate breakthroughs. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. We are proud to be a Living Wage accredited Employer.
The Monolith AI Platform Engineering Team at CoreWeave is responsible for building and scaling the data and workflow backbone that powers the world’s most advanced engineering simulation and AI workflows. The ambition is to become the super‑intelligent AI test lab for the engineering industry, helping customers ship science, faster. From high‑throughput data ingestion and feature pipelines to model training and real‑time inference, the platform delivers a performant, reliable, and trustworthy data foundation trusted by the world’s largest engineering companies.
The Staff Data Engineer will own and evolve Monolith’s platform data services and ETL offerings — the data onboarding, preparation, and lineage capabilities that turn fragmented, real‑world engineering data into production‑ready training and inference pipelines. You’ll partner with Product, Engineering, and Customer‑facing teams to deeply understand client data challenges and translate them into scalable, self‑serve data platform features.
In This Role, You Will:
- Own Monolith’s Data Platform & ETL Surface: Lead the architecture and evolution of core data services for ingestion, transformation, validation, and lineage across training and inference workloads. Design and maintain end‑to‑end data models and schemas that make complex engineering, simulation, and telemetry data discoverable, reusable, and performant. Define standards, contracts, and APIs for how product teams and integrations interact with data services.
- Design & Operate Batch + Streaming Pipelines: Build and operate batch pipelines for large‑scale historical imports, retraining data sets, and migrations from legacy environments. Design and implement streaming pipelines (e.g., using Kafka or similar technologies) for event‑driven or real‑time ingestion and transformation that support online inference, monitoring, and feedback loops. Select and integrate off‑the‑shelf ETL / ELT technologies and own their rollout and long‑term operation.
- Champion Data Lineage, Governance & DataOps: Implement and maintain end‑to‑end data lineage from source systems to derived features and model artifacts, enabling reproducibility, compliance, and faster debugging. Establish DataOps practices: CI/CD for pipelines, observability (metrics, logs, traces), and operational runbooks for data incidents. Help define data quality and governance standards with Security, Compliance, and Customer Success, including privacy and regulatory needs.
- Partner Across Monolith & CoreWeave: Collaborate with Monolith product and engineering teams to expose data services that unlock new user workflows and AI capabilities. Work with CoreWeave infrastructure and AI platform teams to leverage storage, compute, and observability for reliable data flows. Serve as a technical escalation point for forward‑deployed and customer‑facing engineers when questions go deeper than their playbooks, including architecture diagrams of data flow, lineage, and governance constraints.
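The ingestion and streaming responsibilities above can be pictured as a per-record transform step that a consumer loop (Kafka or similar) would apply before writing downstream. A minimal, broker-agnostic sketch in Python; the field names, sources, and lineage tags are illustrative, not taken from Monolith's actual schemas:

```python
from datetime import datetime, timezone

# Hypothetical required fields for an incoming telemetry record.
REQUIRED_FIELDS = {"sensor_id", "timestamp", "value"}

def transform_event(raw: dict, source: str = "telemetry-ingest") -> dict:
    """Validate and normalise one raw record before it is written downstream."""
    missing = REQUIRED_FIELDS - raw.keys()
    if missing:
        # Reject early: bad records go to a dead-letter path, not the pipeline.
        raise ValueError(f"missing fields: {sorted(missing)}")

    return {
        "sensor_id": str(raw["sensor_id"]),
        "timestamp": raw["timestamp"],
        "value": float(raw["value"]),  # normalise to a numeric type
        # Lineage metadata: where the record came from and when we saw it.
        "_source": source,
        "_ingested_at": datetime.now(timezone.utc).isoformat(),
    }

record = transform_event(
    {"sensor_id": "strain-07", "timestamp": "2025-01-01T00:00:00Z", "value": "3.2"}
)
print(record["value"], record["_source"])
```

In a real pipeline this function would sit behind the consumer, with rejected records routed to a dead-letter topic for inspection rather than silently dropped.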
Who You Are:
- Experience & Level: Typically 8+ years of experience as a Data Engineer / Data Platform Engineer (or similar), including ownership of production data pipelines and architectural decisions. Demonstrated Staff‑level impact: leading critical data domains and cross‑team initiatives.
- Data Engineering & Architecture: Deep experience designing end‑to‑end data architectures that cover ingestion, storage, transformation, serving, and observability. Strong, hands‑on experience with both batch and streaming pipelines: batch for historical backfills, and streaming (Kafka or similar platforms) for real‑time transformation. Proficiency with SQL and at least one major analytical database or data warehouse (e.g., PostgreSQL), including schema design and performance tuning. Proficiency with Spark, Ray, or similar distributed data processing frameworks. Solid understanding of data modeling in multi‑tenant SaaS or platform contexts.
- Tooling & Ecosystem: Hands‑on with data orchestration and ETL tooling (e.g., Airflow, dbt, Dagster, Temporal) and able to evaluate and recommend tools that fit our needs. Experience integrating and operating off‑the‑shelf data infrastructure, including rollout and ongoing ownership. Familiarity with cloud infrastructure and containerization (Docker, Kubernetes, and major cloud providers) for deploying data workloads.
- Data Lineage, Quality & DataOps: Extensive experience implementing data lineage solutions for debugging, compliance, and auditability. Strong background in data quality with validation, monitoring, and guardrails. Proficiency with DevOps / DataOps: infra‑as‑code, CI/CD for pipelines, runbooks, and on‑call participation.
- Programming, Systems & Communication: Strong Python programming for data services and platform integrations, with emphasis on maintainability and tests. Experience in service‑oriented architectures with data contracts, SLAs, and failure modes. Clear written and verbal communicator who can explain data architectures to internal stakeholders and occasionally join client conversations as a deep domain expert.
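As a concrete (and purely illustrative) example of the data-quality guardrails mentioned above, a validation pass might compute per-batch metrics and flag violations so the pipeline can alert, quarantine, or halt. The threshold and column name here are invented for the sketch:

```python
def check_batch(rows: list[dict], max_null_rate: float = 0.05) -> dict:
    """Compute simple quality metrics for a batch and flag violations."""
    total = len(rows)
    nulls = sum(1 for r in rows if r.get("value") is None)
    null_rate = nulls / total if total else 0.0
    return {
        "rows": total,
        "null_rate": null_rate,
        # A guardrail downstream logic can act on (alert, quarantine, halt).
        "passed": total > 0 and null_rate <= max_null_rate,
    }

print(check_batch([{"value": 1.0}, {"value": None}]))  # null_rate 0.5, so passed=False
```

In production these metrics would typically be emitted to the observability stack alongside pipeline logs and traces, so quality regressions show up in the same dashboards as operational incidents.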
Preferred:
- Experience in ML/AI platforms or MLOps where data pipelines feed experimentation, training, and inference workflows.
- Background with time‑series, simulation, or experimental data.
- Familiarity with feature stores, experiment tracking systems, or model registries and their integration with upstream pipelines.
- Experience designing data systems for regulated or safety‑critical domains, including privacy, residency, and retention considerations.
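One way to picture the lineage integration described in the preferred experience above: a minimal record tying a derived feature back to its upstream inputs and forward to a model artifact. This is a sketch only; real systems (e.g., OpenLineage or a model registry) carry far richer metadata, and all names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LineageRecord:
    """Links one derived artifact to its upstream inputs."""
    artifact: str            # e.g. a feature table or model version
    inputs: tuple[str, ...]  # upstream datasets or features
    producer: str            # pipeline or job that produced it

feature = LineageRecord("features/strain_v3", ("raw/telemetry_2024",), "etl/strain_daily")
model = LineageRecord("models/fatigue-predictor:1.2", (feature.artifact,), "train/fatigue")

# Walking inputs answers "what fed this model?" for debugging and audits.
print(model.inputs)
```

Chaining such records from source systems through features to model artifacts is what makes retraining reproducible and audit questions answerable.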
Additional Information: This role carries an obligation to protect client data. Applicants may be asked to complete a basic criminal record check, conducted in compliance with GDPR; employment offers are conditional on satisfactory check results.
What We Offer: In addition to a competitive salary, we offer a variety of benefits to support your needs, including family‑level medical and dental insurance, pension contribution, life assurance, critical illness cover, employee assistance program, tuition reimbursement, and a focus on innovative disruption. Benefits may vary by location.
Our Workplace: CoreWeave supports a hybrid work environment with remote work possible for certain locations. New hires attend onboarding at a hub within their first month. Teams gather quarterly to collaborate.
Equal Opportunity: CoreWeave is an equal opportunity employer, committed to fostering an inclusive workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.
Export Control & Privacy: This position may require access to export-controlled information; applicants must be eligible for such access under U.S. export regulations. An updated privacy notice for UK and EU job applicants explains how we process personal data for recruitment and the related rights under GDPR and UK GDPR. We may share application data with Greenhouse Software, Inc. and other providers as part of the recruitment process. You have the rights to access, rectify, erase, restrict processing of, and port your personal data.
Employer: CoreWeave (Staff Data Engineer, London)
Contact: CoreWeave Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Staff Data Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. A personal connection can often get your foot in the door faster than any application.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects and contributions. This gives potential employers a taste of what you can do beyond your CV.
✨Tip Number 3
Prepare for interviews by practising common questions and scenarios related to data engineering. We recommend doing mock interviews with friends or using online platforms to boost your confidence.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team at CoreWeave.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Staff Data Engineer role. Highlight your experience with data architectures, ETL processes, and any relevant technologies like Kafka or Spark. We want to see how your skills align with what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how you can contribute to our mission at CoreWeave. Don’t forget to mention specific projects or experiences that relate to the job description.
Showcase Your Technical Skills: In your application, be sure to showcase your technical skills clearly. Mention your proficiency in SQL, Python, and any data orchestration tools you’ve used. We love seeing hands-on experience, so don’t hold back on the details!
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to submit all your materials in one go. Plus, it helps us keep track of your application better!
How to prepare for a job interview at CoreWeave
✨Know Your Data Inside Out
As a Staff Data Engineer, you'll be expected to have a deep understanding of data architectures and pipelines. Brush up on your knowledge of batch and streaming processes, especially with tools like Kafka. Be ready to discuss specific projects where you've designed or optimised data flows.
✨Showcase Your Collaboration Skills
You'll be working closely with various teams, so it's crucial to demonstrate your ability to collaborate effectively. Prepare examples of how you've partnered with product, engineering, or customer-facing teams in the past to solve complex data challenges.
✨Prepare for Technical Questions
Expect to face technical questions that test your knowledge of SQL, data modelling, and ETL processes. Review common scenarios you might encounter in the role, such as ensuring data quality and lineage, and be ready to explain your thought process clearly.
✨Communicate Clearly and Confidently
As a domain expert, you'll need to explain complex concepts to both technical and non-technical stakeholders. Practice articulating your ideas succinctly and confidently, using clear examples from your experience to illustrate your points.