At a Glance
- Tasks: Own and enhance data observability and operations for our cutting-edge platform.
- Company: Join a forward-thinking tech company focused on innovative disruption.
- Benefits: Enjoy comprehensive health insurance, generous pension contributions, and tuition reimbursement.
- Other info: Collaborative environment with opportunities for mentorship and career growth.
- Why this job: Make a real impact by optimising data pipelines and enhancing operational practices.
- Qualifications: 5-6+ years in DataOps or similar roles with strong technical skills.
The predicted salary is between £70,000 and £90,000 per year.
About the Role
We’re seeking a Senior DataOps Engineer II who can act as the hands‑on owner for Monolith’s data observability and operational surface: from batch and streaming pipelines running on our platform, through to the lineage, quality, and runbooks that keep customer environments healthy.
In This Role, You Will
- Own Monolith’s Data Observability & Operations Surface
  - Design and implement the end‑to‑end observability stack for data workloads (metrics, logs, traces, and data‑quality signals) across batch and streaming pipelines (see the instrumentation sketch after this list).
  - Define and maintain operational SLOs/SLAs for critical data flows powering training, inference, and analytics, and ensure they are measurable and actionable.
  - Build dashboards, alerts, and runbooks that allow engineers and on‑call responders to quickly detect, triage, and remediate data incidents.
  - Standardise “golden paths” for how teams instrument pipelines, expose health signals, and respond to data‑related failures.
- Implement Data Lineage, Quality & Governance
  - Deploy and maintain end‑to‑end data lineage for key domains, from client sources through transformations to features, models, and downstream analytics, so teams can debug, audit, and reason about change.
  - Define and roll out data quality checks (schema, freshness, completeness, distribution, drift) and ensure failures integrate cleanly into alerting and incident workflows (a sketch of such checks follows this list).
  - Partner with Security, Compliance, and customer‑facing teams to encode data governance requirements (e.g., retention, residency, access controls) into our pipelines and tooling.
  - Help shape metadata models and catalog conventions so that producers and consumers can reliably discover, understand, and use shared datasets.
- Enable DataOps Practices Across Teams
  - Establish CI/CD patterns for data pipelines and related infrastructure, including testing strategies, promotion workflows, and change‑management guardrails (see the pipeline test sketch after this list).
  - Drive adoption of infra‑as‑code for data infrastructure (e.g., pipeline orchestration, storage, observability components), reducing manual drift across environments.
  - Define and continuously improve DataOps processes (incident response, post‑incident review, change review, on‑call rotations) with a focus on learning rather than blame.
  - Evaluate and integrate best‑of‑breed DataOps and observability tooling where it accelerates our teams, balancing build vs. buy pragmatically.
- Partner Across Monolith, CoreWeave & Clients
  - Work with Monolith platform, data, agent, and reliability teams to expose observability and lineage as shared services and patterns other engineers can build on.
  - Collaborate with CoreWeave infrastructure and AI platform teams to leverage underlying storage, compute, networking, and observability in service of robust data flows.
  - Serve as a technical escalation point for forward‑deployed and customer‑facing engineers when data issues cross service boundaries or require deeper architectural insight.
  - Mentor data producers (product teams, integrations, forward‑deployed engineers) and data consumers (data scientists, analysts, client engineers) on resilient schemas, contracts, and operational practices.
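To give a flavour of the instrumentation “golden path” described above, here is a minimal sketch of batch‑pipeline health signals, assuming the prometheus_client Python library as the metrics backend; the metric names and the record_batch helper are hypothetical illustrations, not part of Monolith’s actual stack.

```python
# Minimal sketch of batch-pipeline instrumentation, assuming prometheus_client
# as the metrics backend. Metric names and labels here are hypothetical.
import time
from prometheus_client import Counter, Gauge, start_http_server

ROWS_PROCESSED = Counter(
    "pipeline_rows_processed_total",
    "Rows successfully processed, per pipeline",
    ["pipeline"],
)
LAST_SUCCESS_TS = Gauge(
    "pipeline_last_success_timestamp_seconds",
    "Unix time of the last successful run, per pipeline",
    ["pipeline"],
)

def record_batch(pipeline: str, row_count: int) -> None:
    """Emit the health signals an SLO alert would key off."""
    ROWS_PROCESSED.labels(pipeline=pipeline).inc(row_count)
    LAST_SUCCESS_TS.labels(pipeline=pipeline).set_to_current_time()

if __name__ == "__main__":
    start_http_server(9100)  # expose /metrics for scraping
    record_batch("orders_daily", 10_000)
    time.sleep(60)           # keep the endpoint up long enough for a scrape
```

A freshness SLO can then be expressed as an alert on the gap between the current time and pipeline_last_success_timestamp_seconds.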
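Likewise, here is a minimal, standard‑library‑only sketch of the schema, freshness, and completeness checks mentioned above; the column names, thresholds, and run_checks wiring are hypothetical and would normally live in per‑dataset configuration.

```python
# Hypothetical schema / freshness / completeness checks; in practice a data
# quality framework would supply these, wired into alerting on failure.
from datetime import datetime, timedelta, timezone

EXPECTED_COLUMNS = {"id", "ts", "value"}  # hypothetical dataset contract

def check_schema(rows: list[dict]) -> list[str]:
    failures = []
    for i, row in enumerate(rows):
        missing = EXPECTED_COLUMNS - row.keys()
        if missing:
            failures.append(f"row {i}: missing columns {sorted(missing)}")
    return failures

def check_freshness(latest_ts: datetime, max_lag: timedelta) -> list[str]:
    lag = datetime.now(timezone.utc) - latest_ts
    return [f"data is {lag} stale (limit {max_lag})"] if lag > max_lag else []

def check_completeness(rows: list[dict], expected_min: int) -> list[str]:
    if len(rows) < expected_min:
        return [f"got {len(rows)} rows, expected >= {expected_min}"]
    return []

def run_checks(rows: list[dict], latest_ts: datetime) -> None:
    failures = (check_schema(rows)
                + check_freshness(latest_ts, timedelta(hours=2))
                + check_completeness(rows, expected_min=1000))
    if failures:
        # In production this would page or open an incident, per the
        # alerting integration described above.
        raise RuntimeError("; ".join(failures))
```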
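Finally, a sketch of the kind of pipeline unit test a CI/CD golden path might gate promotion on, assuming pytest as the test runner; normalise_reading is a hypothetical transformation used only for illustration.

```python
# Hypothetical pipeline unit test, assuming pytest as the runner.
import pytest

def normalise_reading(raw: dict) -> dict:
    """Hypothetical transform: scale a sensor value and tag its unit."""
    return {"sensor": raw["sensor"], "value": raw["value"] / 1000, "unit": "kV"}

def test_normalise_reading_scales_value():
    out = normalise_reading({"sensor": "bench-1", "value": 12_500})
    assert out == {"sensor": "bench-1", "value": 12.5, "unit": "kV"}

def test_normalise_reading_rejects_missing_fields():
    with pytest.raises(KeyError):
        normalise_reading({"sensor": "bench-1"})
```

Running such tests on every change, before promotion between environments, is one concrete form the change‑management guardrails above can take.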
Who You Are
- Experience & Level
  - Typically 5–6+ years of experience in DataOps, Data Engineering, DevOps/SRE for data platforms, or similar roles, including end‑to‑end ownership of production data pipelines and their operations.
  - Proven track record of operating at Senior IC scope: leading cross‑team initiatives, introducing new practices/tooling, and improving reliability at the platform level.
- DataOps, Pipelines & Tooling
  - Strong hands‑on experience designing, deploying, and operating data pipelines in production (batch and/or streaming), including failure modes, retries, and backfills.
  - Practical experience with data orchestration and ETL/ELT tooling (e.g., Airflow, Dagster, dbt, Temporal, or similar) and comfort evaluating and integrating new tools where appropriate.
  - Solid SQL and/or Spark skills and experience with at least one major analytical database or warehouse; familiarity with time‑series/telemetry data is a plus.
- Observability, Lineage & Data Quality
  - Extensive experience implementing data observability (metrics, logging, tracing, dashboards, and alerting) for data‑centric workloads.
  - Hands‑on work with data quality frameworks and/or observability platforms to monitor freshness, completeness, schema changes, and anomalies.
  - Experience deploying and using data lineage or metadata/catalog solutions, and applying them to debugging, compliance, and change‑impact analysis.
- Platform, Infrastructure & Automation
  - Comfortable working in containerised, cloud‑native environments (Kubernetes plus at least one major cloud provider); experience with GPU‑ or compute‑intensive workloads is a bonus.
  - Strong automation mindset: infra‑as‑code, CI/CD, and configuration management for data infrastructure and observability components.
  - Proficient in Python for building tooling, pipeline glue, and platform integrations; additional languages are a plus.
- Collaboration, Mentorship & Communication
  - Clear communicator who can explain complex data flows and failure modes to both deeply technical and non‑specialist audiences.
  - Experience mentoring engineers and data practitioners on better data management, observability, and operational hygiene, through documentation, examples, reviews, and office hours.
  - Comfortable working in a fast‑moving, high‑ambiguity environment where we balance rapid iteration with the safety and reliability demanded by enterprise engineering clients.
Preferred
- Experience in ML/AI platforms or MLOps environments where data pipelines power experimentation, training, and inference at scale.
- Background with test, simulation, or time‑series data (e.g., physical test benches, battery labs, automotive/aerospace R&D).
- Familiarity with feature stores, experiment tracking, or model registries and their interaction with upstream data pipelines.
- Prior work in multi‑tenant SaaS platforms, especially those with strong compliance, observability, and uptime requirements.
- Experience supporting or partnering closely with forward‑deployed / professional services teams in complex customer environments.
Benefits
- Family-level Medical Insurance
- Family-level Dental Insurance
- Generous Pension Contribution
- Life Assurance at 4x Salary
- Critical Illness Cover
- Employee Assistance Programme
- Tuition Reimbursement
- Work culture focused on innovative disruption
Equal Employment Opportunity
CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.
Employer: CoreWeave (DataOps Engineer, London)
Contact: CoreWeave Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the DataOps Engineer role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects and contributions. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by practising common questions and scenarios related to DataOps. Think about how you’d handle data incidents or improve observability – this will help you shine during those crucial conversations.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the DataOps Engineer role. Highlight your relevant experience with data pipelines, observability, and any tools mentioned in the job description. We want to see how your skills align with what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about DataOps and how your background makes you a great fit for our team. Don’t forget to mention any specific projects or achievements that relate to the role.
Showcase Your Technical Skills: Be sure to include any hands-on experience you have with data orchestration tools, SQL, or Python. We love seeing practical examples of how you've implemented data observability or quality checks in your previous roles.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role.
How to prepare for a job interview at CoreWeave
✨Know Your DataOps Inside Out
Make sure you brush up on your DataOps knowledge, especially around observability and pipeline management. Be ready to discuss your hands-on experience with tools like Airflow or dbt, and how you've implemented data quality checks in past roles.
✨Showcase Your Problem-Solving Skills
Prepare to share specific examples of how you've tackled data incidents or failures in the past. Highlight your approach to incident response and how you’ve improved processes to prevent future issues. This will demonstrate your ability to think critically under pressure.
✨Communicate Clearly and Confidently
Practice explaining complex data flows and technical concepts in simple terms. You might be asked to explain your work to non-technical stakeholders, so being able to communicate effectively is key. Use examples from your experience to illustrate your points.
✨Be Ready to Collaborate
Expect questions about how you work with cross-functional teams. Think of instances where you've partnered with engineers, data scientists, or compliance teams. Emphasise your mentorship experiences and how you’ve helped others understand data governance and operational practices.