At a Glance
- Tasks: Join us to build and scale data pipelines for cutting-edge AI workflows.
- Company: CoreWeave, the essential cloud for AI, fostering innovation and collaboration.
- Benefits: Enjoy competitive salary, health insurance, pension contributions, and tuition reimbursement.
- Other info: Dynamic, fast-paced environment with endless growth opportunities.
- Why this job: Make a real impact in the AI space while working with top talent.
- Qualifications: 5-6+ years in DataOps or similar roles, with strong pipeline and observability experience.
The predicted salary is between £60,000 and £80,000 per year.
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability. Founded in 2017, CoreWeave became a publicly traded company (Nasdaq: CRWV) in March 2025. Learn more at www.coreweave.com. We're proud to be an accredited Living Wage Employer.
What You’ll Do:
The Monolith AI Platform Engineering Team at CoreWeave is responsible for building and scaling the data and workflow backbone that powers the world’s most advanced engineering simulation and AI workflows. Our ambition is to become the super-intelligent AI test lab for the engineering industry, helping customers ship science, faster. From high-throughput data ingestion and feature pipelines to model training and real-time inference, our platform delivers the performant, reliable, and trustworthy data foundation trusted by the world’s largest engineering companies.
The Senior DataOps Engineer II will own and drive all things data observability and operations across our client estate, building the practices, tooling, and culture that make Monolith’s data flows debuggable, auditable, and safe to evolve. You’ll sit at the intersection of platform engineering, data engineering, and reliability, implementing end-to-end lineage and DataOps practices while mentoring data producers and consumers on how to manage data as a first-class product. You’ll partner closely with Monolith’s Product, Engineering, and forward-deployed teams, as well as with CoreWeave’s infrastructure and AI platform groups, to turn fragmented, real-world engineering data into well-governed, observable, and operationally robust pipelines powering our SaaS platform and client-specific deployments.
About the Role: We’re seeking a Senior DataOps Engineer II who can act as the hands‑on owner for Monolith’s data observability and operational surface: from batch and streaming pipelines running on our platform, through to the lineage, quality, and runbooks that keep customer environments healthy. You’ll define and roll out DataOps practices (CI/CD, infra‑as‑code, data SLOs, incident response) across the Monolith estate, implement end‑to‑end data lineage and observability, and serve as the go‑to mentor for engineering teams and client‑facing colleagues on best‑practice data management.
In this role, you will:
Own Monolith’s Data Observability & Operations Surface
- Design and implement the end‑to‑end observability stack for data workloads (metrics, logs, traces, and data‑quality signals) across batch and streaming pipelines.
- Define and maintain operational SLOs/SLAs for critical data flows powering training, inference, and analytics, and ensure they are measurable and actionable.
- Build dashboards, alerts, and runbooks that allow engineers and on‑call responders to quickly detect, triage, and remediate data incidents.
- Standardise “golden paths” for how teams instrument pipelines, expose health signals, and respond to data‑related failures.
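To make the data-SLO idea above concrete, here is a minimal sketch of a freshness check that could feed a dashboard or alert rule. The dataset name and 15-minute threshold are invented for illustration, not taken from the actual platform:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class FreshnessSLO:
    """A data SLO: the newest record must be no older than `max_age`."""
    dataset: str
    max_age: timedelta

def check_freshness(slo: FreshnessSLO, latest_record_ts: datetime) -> dict:
    """Return a health signal suitable for a metrics pipeline or alert rule."""
    age = datetime.now(timezone.utc) - latest_record_ts
    return {
        "dataset": slo.dataset,
        "age_seconds": age.total_seconds(),
        "slo_seconds": slo.max_age.total_seconds(),
        "healthy": age <= slo.max_age,
    }

# Hypothetical example: sensor telemetry must land within 15 minutes.
slo = FreshnessSLO(dataset="sensor_telemetry", max_age=timedelta(minutes=15))
signal = check_freshness(slo, datetime.now(timezone.utc) - timedelta(minutes=5))
```

Emitting a structured signal like this, rather than alerting inline, keeps the measurement reusable across dashboards, alerts, and SLO reports.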
Implement Data Lineage, Quality & Governance
- Deploy and maintain end‑to‑end data lineage for key domains, from client sources through transformations to features, models, and downstream analytics, so teams can debug, audit, and reason about change.
- Define and roll out data quality checks (schema, freshness, completeness, distribution, drift) and ensure failures integrate cleanly into alerting and incident workflows.
- Partner with Security, Compliance, and customer‑facing teams to encode data governance requirements (e.g., retention, residency, access controls) into our pipelines and tooling.
- Help shape metadata models and catalog conventions so that producers and consumers can reliably discover, understand, and use shared datasets.
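The schema and completeness checks described above can be sketched without any particular framework; the column names and null-fraction threshold below are illustrative only:

```python
# Expected columns and types for a hypothetical dataset.
EXPECTED_SCHEMA = {"run_id": str, "sensor": str, "value": float}

def check_schema(rows):
    """Schema check: every row has the expected columns with the expected types.
    Returns a list of (row_index, column) failures; empty means healthy."""
    failures = []
    for i, row in enumerate(rows):
        for col, typ in EXPECTED_SCHEMA.items():
            if col not in row or not isinstance(row[col], typ):
                failures.append((i, col))
    return failures

def check_completeness(rows, column, max_null_fraction=0.01):
    """Completeness check: at most `max_null_fraction` of values may be missing."""
    nulls = sum(1 for row in rows if row.get(column) is None)
    return (nulls / max(len(rows), 1)) <= max_null_fraction

rows = [
    {"run_id": "r1", "sensor": "temp", "value": 21.5},
    {"run_id": "r2", "sensor": "temp", "value": 22.1},
]
assert check_schema(rows) == []           # no schema violations
assert check_completeness(rows, "value")  # no missing values
```

In practice the failure lists would be routed into the same alerting and incident workflows as infrastructure signals, rather than asserted inline.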
Enable DataOps Practices Across Teams
- Establish CI/CD patterns for data pipelines and related infrastructure, including testing strategies, promotion workflows, and change‑management guardrails.
- Drive adoption of infra‑as‑code for data infrastructure (e.g., pipeline orchestration, storage, observability components), reducing manual drift across environments.
- Define and continuously improve DataOps processes — incident response, post‑incident review, change review, on‑call rotations — with a focus on learning rather than blame.
- Evaluate and integrate best‑of‑breed DataOps and observability tooling where it accelerates our teams, balancing build vs. buy pragmatically.
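As one illustration of the CI/CD pattern above, pipeline transformations can be written as pure functions of their input, which makes them unit-testable in an ordinary CI run and safer to re-execute during backfills. The transform and unit conversion below are hypothetical:

```python
def normalise_units(rows):
    """Hypothetical pipeline step: convert pressure readings from bar to pascal.
    Pure function: returns new rows and never mutates its input."""
    PA_PER_BAR = 100_000
    return [{**row, "pressure_pa": row["pressure_bar"] * PA_PER_BAR} for row in rows]

# A CI test would assert on a small fixture like this:
fixture = [{"sensor": "p1", "pressure_bar": 1.5}]
out = normalise_units(fixture)
assert out[0]["pressure_pa"] == 150_000
assert fixture[0].get("pressure_pa") is None  # input left untouched
```

Keeping transforms pure also means the same fixture-based tests can gate promotion between environments without standing up real infrastructure.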
Partner Across Monolith, CoreWeave & Clients
- Work with Monolith platform, data, agent, and reliability teams to expose observability and lineage as shared services and patterns other engineers can build on.
- Collaborate with CoreWeave infrastructure and AI platform teams to leverage underlying storage, compute, networking, and observability in service of robust data flows.
- Serve as a technical escalation point for forward‑deployed and customer‑facing engineers when data issues cross service boundaries or require deeper architectural insight.
- Mentor data producers (product teams, integrations, forward‑deployed engineers) and data consumers (data scientists, analysts, client engineers) on resilient schemas, contracts, and operational practices.
Who You Are:
- Typically 5–6+ years of experience in DataOps, Data Engineering, DevOps/SRE for data platforms, or similar roles, including end‑to‑end ownership of production data pipelines and their operations.
- Proven track record of operating at Senior IC scope: leading cross‑team initiatives, introducing new practices/tooling, and improving reliability at the platform level.
- Strong hands‑on experience designing, deploying, and operating data pipelines in production (batch and/or streaming), including failure modes, retries, and backfills.
- Practical experience with data orchestration and ETL/ELT tooling (e.g., Airflow, Dagster, dbt, Temporal, or similar) and comfort evaluating and integrating new tools where appropriate.
- Solid SQL and/or Spark skills and experience with at least one major analytical database or warehouse; familiarity with time‑series / telemetry data is a plus.
- Extensive experience implementing data observability — metrics, logging, tracing, dashboards, and alerting — for data‑centric workloads.
- Hands‑on work with data quality frameworks and/or observability platforms to monitor freshness, completeness, schema changes, and anomalies.
- Experience deploying and using data lineage or metadata/catalog solutions, and applying them to debugging, compliance, and change‑impact analysis.
- Comfortable working in containerised, cloud‑native environments (Kubernetes plus at least one major cloud provider); experience with GPU‑ or compute‑intensive workloads is a bonus.
- Strong automation mindset: infra‑as‑code, CI/CD, and configuration management for data infrastructure and observability components.
- Proficient in Python for building tooling, pipeline glue, and platform integrations; additional languages are a plus.
- Clear communicator who can explain complex data flows and failure modes to both deeply technical and non‑specialist audiences.
- Experience mentoring engineers and data practitioners on better data management, observability, and operational hygiene — through documentation, examples, reviews, and office hours.
- Comfortable working in a fast‑moving, high‑ambiguity environment where we balance rapid iteration with the safety and reliability demanded by enterprise engineering clients.
Preferred:
- Experience in ML/AI platforms or MLOps environments where data pipelines power experimentation, training, and inference at scale.
- Background with test, simulation, or time‑series data (e.g., physical test benches, battery labs, automotive/aerospace R&D).
- Familiarity with feature stores, experiment tracking, or model registries and their interaction with upstream data pipelines.
- Prior work in multi‑tenant SaaS platforms, especially those with strong compliance, observability, and uptime requirements.
- Experience supporting or partnering closely with forward‑deployed / professional services teams in complex customer environments.
Wondering if you’re a good fit? We believe in investing in our people, and value candidates who bring diverse experiences — even if you don’t tick every single box. Here are a few qualities we’ve found compatible with our team. If some of this sounds like you, we’d love to talk:
- Data‑obsessed operator – You care deeply about making data systems observable, predictable, and easy to reason about, not just “working most of the time.”
- Systems thinker – You enjoy mapping complex data flows across services, understanding failure modes, and designing for graceful degradation and rapid recovery.
- Pragmatic – You know when to build the ideal abstraction and when to ship the smallest change that meaningfully reduces risk or toil.
- Collaborative mentor – You get energy from helping other teams level up their data practices, and you can influence without heavy process or authority.
- Owner’s mindset – You feel responsible for the outcomes of the platform as a whole, not just the code you write, and you follow issues across boundaries until they’re truly resolved.
Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper‑growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:
- Be Curious at Your Core
- Act Like an Owner
- Empower Employees
- Deliver Best‑in‑Class Client Experiences
- Achieve More Together
We support and encourage an entrepreneurial outlook and independent thinking, and foster an environment that encourages collaboration and innovative solutions to complex problems. As we get set for takeoff, the organization’s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!
To fulfil our obligation to protect client data, successful applicants offered employment with CoreWeave will be required to complete a basic criminal record check, conducted in compliance with GDPR. Employment offers are conditional upon receiving satisfactory check results.
What We Offer: In addition to a competitive salary, we offer a variety of benefits to support your needs, including:
- Family-level Medical Insurance
- Family-level Dental Insurance
- Generous Pension Contribution
- Life Assurance at 4x Salary
- Critical Illness Cover
- Employee Assistance Programme
- Tuition Reimbursement
- Work culture focused on innovative disruption
Benefits may vary by location.
Our Workplace: While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.
CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.
CoreWeave does not accept speculative CVs. Any unsolicited CVs received will be treated as the property of CoreWeave, and any terms and conditions associated with their use will be considered null and void.
Export Control Compliance: This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.
DataOps Engineer in London
Employer: CoreWeave Europe
Contact: CoreWeave Europe Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the DataOps Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects and contributions. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by practising common questions and scenarios related to DataOps. Think about how you can demonstrate your problem-solving skills and technical expertise during the chat.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are genuinely interested in joining our team!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the DataOps Engineer role. Highlight relevant experience and skills that align with what we’re looking for, like your hands-on experience with data pipelines and observability.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data observability and how your background makes you a great fit for our team at CoreWeave.
Showcase Your Projects: If you've worked on any cool projects related to data engineering or DataOps, don’t forget to mention them! We love seeing real-world applications of your skills and how you’ve tackled challenges.
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s the easiest way for us to keep track of your application and ensure it reaches the right people!
How to prepare for a job interview at CoreWeave Europe
✨Know Your DataOps Inside Out
Make sure you brush up on your DataOps knowledge before the interview. Understand the key concepts like CI/CD, data lineage, and observability. Be ready to discuss how you've implemented these practices in past roles, as CoreWeave is looking for someone who can hit the ground running.
✨Showcase Your Problem-Solving Skills
Prepare to share specific examples of how you've tackled data-related challenges in previous positions. Think about times when you had to debug a data pipeline or improve data quality. This will demonstrate your hands-on experience and ability to think critically under pressure.
✨Familiarise Yourself with CoreWeave's Mission
Take some time to understand CoreWeave's vision and how they support AI innovation. Being able to articulate how your skills align with their goals will show that you're genuinely interested in the role and the company. Plus, it’ll help you connect your experience to their needs.
✨Ask Insightful Questions
Prepare thoughtful questions to ask during the interview. Inquire about their current data challenges, the tools they use, or how they envision the future of their DataOps practices. This not only shows your interest but also helps you gauge if the company is the right fit for you.