At a Glance
- Tasks: Lead the design and evolution of data services for AI workflows.
- Company: CoreWeave, a pioneering cloud platform for AI innovation.
- Benefits: Competitive salary, family-level medical insurance, and tuition reimbursement.
- Other info: Dynamic work culture focused on collaboration and innovation.
- Why this job: Join a fast-growing team and shape the future of engineering simulations.
- Qualifications: 8+ years in data engineering with strong architecture skills.
The predicted salary is between £80,000 and £100,000 per year.
CoreWeave is The Essential Cloud for AI™. Built for pioneers by pioneers, CoreWeave delivers a platform of technology, tools, and teams that enables innovators to build and scale AI with confidence. Trusted by leading AI labs, startups, and global enterprises, CoreWeave combines superior infrastructure performance with deep technical expertise to accelerate breakthroughs and turn compute into capability.
What You’ll Do: The Monolith AI Platform Engineering Team at CoreWeave is responsible for building and scaling the data and workflow backbone that powers the world’s most advanced engineering simulation and AI workflows — our ambition is to become the super‑intelligent AI test lab for the engineering industry, helping customers ship science, faster. From high‑throughput data ingestion and feature pipelines to model training and real‑time inference, our platform delivers the performant, reliable, and trustworthy data foundation trusted by the world’s largest engineering companies. The Staff Data Engineer will own and evolve Monolith’s platform data services and ETL offerings — the data onboarding, preparation, and lineage capabilities that turn fragmented, real‑world engineering data into production‑ready training and inference pipelines. You’ll partner with Product, Engineering, and Customer‑facing teams to deeply understand client data challenges and translate them into scalable, self‑serve data platform features.
About the Role: We’re seeking a Staff Data Engineer who can own Monolith’s data platform surface end‑to‑end: from offline batch pipelines and large historical backfills to low‑latency, real‑time streaming data flows that power online inference and feedback loops. You’ll define and drive our data architecture, champion data quality and lineage, and decide how customer data moves through Monolith from raw ingestion to governed, observable, and reproducible training sets. You’ll primarily work with internal teams (Product, Customer Success, Forward‑Deployed Engineers, Software Engineers, Data Scientists), and step in as a domain expert when clients need deeper guidance.
In this role, you will:
- Own Monolith’s Data Platform & ETL Surface: Lead the architecture and evolution of core data services for ingestion, transformation, validation, and lineage across training and inference workloads. Design and maintain end‑to‑end data models and schemas that make complex engineering, simulation, and telemetry data discoverable, reusable, and performant. Define standards, contracts, and APIs for how product teams and integrations interact with data services.
- Design & Operate Batch + Streaming Pipelines: Build and operate batch pipelines for large‑scale historical imports, retraining data sets, and migrations from legacy environments. Design and implement streaming pipelines (e.g., using Kafka or similar technologies) for event‑driven or real‑time ingestion and transformation that support online inference, monitoring, and feedback loops. Select and integrate off‑the‑shelf, industry‑proven ETL / ELT technologies (e.g., workflow orchestrators, ingestion frameworks, transformation engines) and own their rollout and long‑term operation.
- Champion Data Lineage, Governance & DataOps: Implement and maintain end‑to‑end data lineage from source systems to derived features and model artifacts, enabling reproducibility, compliance, and faster debugging. Establish DataOps practices: CI/CD for pipelines, observability (metrics, logs, traces), and operational runbooks for data incidents. Help define data quality and governance standards in partnership with Security, Compliance, and Customer Success, including support for privacy and regulatory needs (e.g., GDPR‑aligned flows and data residency discussions).
- Partner Across Monolith & CoreWeave: Work with Monolith product and engineering teams (platform, agents, experiments, reliability) to expose data services that unlock new user workflows and AI capabilities. Collaborate with CoreWeave infrastructure and AI platform teams to leverage storage, compute, and observability for reliable data flows. Serve as a technical escalation point for forward‑deployed and customer‑facing engineers when questions go deeper than their playbooks (e.g., architecture diagrams of data flow, lineage, and governance constraints).
Who You Are:
- Experience & Level: Typically 8+ years of experience as a Data Engineer / Data Platform Engineer (or similar), including ownership of production data pipelines and architectural decisions. Demonstrated Staff‑level impact: leading critical data domains and cross‑team initiatives.
- Data Engineering & Architecture: Deep experience designing end‑to‑end data architectures that cover ingestion, storage, transformation, serving, and observability. Strong, hands‑on experience with both batch pipelines (scheduled workloads, historical backfills, retraining pipelines) and streaming pipelines (Kafka or similar event streaming platforms, plus real‑time transformation patterns for low‑latency consumption). Proficiency with SQL and at least one major analytical database or data warehouse (e.g., PostgreSQL or similar), including schema design and performance tuning. Proficiency with Spark, Ray, or similar distributed data processing frameworks. Solid understanding of data modeling (event logs, star schemas, feature tables) in multi‑tenant SaaS or platform contexts.
- Tooling & Ecosystem: Hands‑on with data orchestration and ETL tooling (e.g., Airflow, dbt, Dagster, Temporal, or equivalents) and able to evaluate and recommend tools that fit our needs. Experience integrating and operating off‑the‑shelf data infrastructure (not just bespoke systems), including rollout, migration plans, and ongoing ownership. Familiarity with cloud infrastructure and containerization (Docker, Kubernetes, at least one major cloud provider) for deploying and scaling data workloads.
- Data Lineage, Quality & DataOps: Extensive experience implementing data lineage solutions and using them for debugging, compliance, and auditability. Strong background in data quality: validation frameworks, monitoring, and guardrails that prevent bad data from reaching downstream consumers. Proficiency with DevOps / DataOps practices: infra‑as‑code, CI/CD for pipelines, runbooks, and participation in on‑call / incident response for data issues.
- Programming, Systems & Communication: Strong programming skills in Python for building data services, transformations, and platform integrations, with an emphasis on maintainability and tests. Comfortable working in service‑oriented architectures, reasoning about data contracts, SLAs, and failure modes across services. Clear written and verbal communicator who can explain data architectures and trade‑offs to internal stakeholders, and occasionally join client conversations as a deep domain expert to answer questions about data flows, security, and governance (e.g., with CIOs, heads of data, or security counterparts), while day‑to‑day client interaction is handled by forward‑deployed teams.
Preferred:
- Experience in ML/AI platforms or MLOps environments where data pipelines feed experimentation, training, and inference workflows at scale.
- Background with time‑series, simulation, or experimental data (e.g., physical test benches, sensors, engineering simulations).
- Familiarity with feature stores, experiment tracking systems, or model registries and their integration with upstream pipelines.
- Experience designing data systems for regulated or safety‑critical domains, including privacy, residency, and retention considerations.
Wondering if you’re a good fit? We believe in investing in our people, and value candidates who can bring their own diversified experiences to our teams — even if you aren’t a 100% skill or experience match. If some of this describes you, we’d love to talk:
- Product‑minded – you design data platforms and pipelines that are intuitive to use and unlock real workflows for engineers and customers.
- Systems thinker – you enjoy mapping complex data flows, understanding failure modes, and building robust, observable systems.
- Pragmatic – you balance solid engineering with shipping value quickly, knowing when to invest in abstractions and when to deliver incremental improvements.
- Collaborative owner – you partner well across engineering, product, and customer‑facing teams, and you feel responsible for outcomes, not just code.
Why CoreWeave? At CoreWeave, we work hard, have fun, and move fast! We’re in an exciting stage of hyper‑growth that you will not want to miss out on. We’re not afraid of a little chaos, and we’re constantly learning. Our team cares deeply about how we build our product and how we work together, which is represented through our core values:
- Be Curious at Your Core
- Act Like an Owner
- Empower Employees
- Deliver Best‑in‑Class Client Experiences
- Achieve More Together
We support and encourage an entrepreneurial outlook and independent thinking, and foster an environment that encourages collaboration and innovative solutions to complex problems. As we get set for takeoff, the organization’s growth opportunities are constantly expanding. You will be surrounded by some of the best talent in the industry, who will want to learn from you, too. Come join us!
To fulfil our obligation to protect client data, successful applicants offered employment with CoreWeave will be required to complete a basic criminal record check, conducted in compliance with GDPR. Employment offers are conditional upon receiving satisfactory check results.
What We Offer: In addition to a competitive salary, we offer a variety of benefits to support your needs, including:
- Family-level Medical Insurance
- Family-level Dental Insurance
- Generous Pension Contribution
- Life Assurance at 4x Salary
- Critical Illness Cover
- Employee Assistance Programme
- Tuition Reimbursement
- Work culture focused on innovative disruption
Benefits may vary by location.
Our Workplace: While we prioritize a hybrid work environment, remote work may be considered for candidates located more than 30 miles from an office, based on role requirements for specialized skill sets. New hires will be invited to attend onboarding at one of our hubs within their first month. Teams also gather quarterly to support collaboration.
CoreWeave is an equal opportunity employer, committed to fostering an inclusive and supportive workplace. All qualified applicants and candidates will receive consideration for employment without regard to race, color, religion, sex, disability, age, sexual orientation, gender identity, national origin, veteran status, or genetic information.
CoreWeave does not accept speculative CVs. Any unsolicited CVs received will be treated as the property of CoreWeave, and any terms and conditions associated with their use will be considered null and void.
Export Control Compliance: This position requires access to export controlled information. To conform to U.S. Government export regulations applicable to that information, applicant must either be (A) a U.S. person, defined as a (i) U.S. citizen or national, (ii) U.S. lawful permanent resident (green card holder), (iii) refugee under 8 U.S.C. § 1157, or (iv) asylee under 8 U.S.C. § 1158, (B) eligible to access the export controlled information without a required export authorization, or (C) eligible and reasonably likely to obtain the required export authorization from the applicable U.S. government agency. CoreWeave may, for legitimate business reasons, decline to pursue any export licensing process.
Staff Data Engineer employer: CoreWeave Europe
Contact Detail:
CoreWeave Europe Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Staff Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in your industry, especially those at CoreWeave. A friendly chat can open doors that a CV just can't.
✨Tip Number 2
Show off your skills! If you’ve got a portfolio or projects that highlight your data engineering prowess, share them during interviews. It’s a great way to demonstrate your expertise.
✨Tip Number 3
Prepare for the technical grill! Brush up on your data architecture and pipeline knowledge. Be ready to discuss how you’d tackle real-world challenges at CoreWeave.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets the attention it deserves. Plus, we love seeing candidates who take that extra step!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Staff Data Engineer role. Highlight relevant experience and skills that match the job description, especially around data architecture and pipeline management. We want to see how you can contribute to our mission!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background aligns with our goals at CoreWeave. Be genuine and let your personality come through!
Showcase Your Projects: If you've worked on any cool data projects, don’t hold back! Include links or descriptions of your work that demonstrate your expertise in building data pipelines and handling complex data challenges. We love seeing real-world applications of your skills!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team at CoreWeave!
How to prepare for a job interview at CoreWeave Europe
✨Know Your Data Inside Out
As a Staff Data Engineer, you’ll be expected to have a deep understanding of data architectures and pipelines. Brush up on your knowledge of batch and streaming processes, especially with tools like Kafka. Be ready to discuss how you've designed and optimised data flows in previous roles.
✨Showcase Your Problem-Solving Skills
CoreWeave values innovative solutions to complex problems. Prepare examples of challenges you've faced in data engineering and how you tackled them. Highlight your experience with DataOps practices and how they’ve improved data quality and governance in your past projects.
✨Collaborate Like a Pro
This role requires working closely with various teams. Be prepared to discuss how you’ve partnered with product, engineering, and customer-facing teams in the past. Share specific instances where your collaboration led to successful outcomes or improved workflows.
✨Communicate Clearly and Confidently
Strong communication skills are key for this position. Practice explaining complex data concepts in simple terms, as you may need to engage with non-technical stakeholders. Think about how you can convey your ideas effectively during the interview, especially when discussing data governance and lineage.