At a Glance
- Tasks: Build and scale machine learning systems for content safety and policy enforcement.
- Company: Join Spotify, a leader in creating joyful listening experiences for billions.
- Benefits: Flexible work environment, competitive salary, and opportunities for professional growth.
- Other info: Work in a dynamic team with opportunities to mentor and grow your skills.
- Why this job: Make a real impact on user safety while working with cutting-edge technology.
- Qualifications: Experience in machine learning systems and strong collaboration skills required.
The predicted salary is between £70,000 and £90,000 per year.
We design Spotify’s consumer experience—end to end, moment to moment, across every screen, platform, and partner integration. Our mission is to make listening feel effortless, personal, and joyful for billions of users around the world. That means turning complexity into clarity across hundreds of touchpoints—from our mobile and desktop apps to the smart speakers, TVs, cars, and integrations where Spotify shows up every day. If it touches a consumer, we shape it. We bring deep insight into human behavior, design, and technology to craft experiences that feel intuitive, expressive, and unmistakably Spotify.
About the Team
The Policy & Safety team sits within the Content Platform domain and builds the systems that keep Spotify safe and trustworthy at scale. We own the infrastructure behind content moderation, including detection models, policy enforcement systems, compliance pipelines, and the safety-by-default platform. Our work is critical to every new content type and product experience—from messaging and comments to collaborative and emerging AI-driven features. We partner closely with Trust & Safety, Legal, and Public Affairs to ensure that safety is built into Spotify experiences from the start.
What You Will Do
- Build and scale machine learning systems for proactive content detection, classification, and pre‑publish safety scanning
- Design and implement policy evaluation frameworks, including standardized datasets, offline and online metrics, and continuous improvement loops
- Develop multimodal models that combine text, audio, image, and video signals for safety and policy enforcement
- Architect feedback loops that turn reviewer input into structured training data for continuous model improvement
- Translate regulatory requirements into scalable ML system designs, including accuracy and reporting expectations
- Partner with cross‑functional teams across Trust & Safety, Legal, Public Affairs, and Product to deliver safe user experiences
- Drive technical direction in ambiguous problem spaces and contribute to long‑term platform architecture
- Mentor and support other machine learning engineers, helping grow technical capability across the team
Who You Are
- You have experience building and shipping production‑grade machine learning systems at scale
- You are experienced with ML evaluation, including dataset design, metrics, and model performance monitoring
- You have worked with multimodal machine learning across text, audio, image, or video domains
- You have experience with human‑in‑the‑loop systems, active learning, or feedback‑driven model improvement
- You are comfortable translating complex requirements into technical solutions, including policy or regulatory constraints
- You are experienced working across teams and influencing technical direction in large systems
- You are comfortable navigating ambiguity and making thoughtful trade‑offs between speed, quality, and risk
- You communicate clearly and collaborate effectively with both technical and non‑technical partners
Where You Will Be
This role is based in London or Stockholm. We offer you the flexibility to work where you work best! There will be some in‑person meetings, but the role still allows the flexibility to work from home.
Staff Machine Learning Engineer - Safety & Policy in London | Employer: Creandum
Contact Details:
Creandum Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Staff Machine Learning Engineer - Safety & Policy role in London
✨Tip Number 1
Network like a pro! Reach out to folks in the industry, especially those at Spotify or similar companies. A friendly chat can open doors and give you insights that a job description just can't.
✨Tip Number 2
Show off your skills! If you've got a portfolio or projects that highlight your machine learning expertise, make sure to share them during interviews. Real-world examples can really set you apart from the crowd.
✨Tip Number 3
Prepare for those tricky questions! Brush up on your technical knowledge and be ready to discuss how you've tackled complex problems in the past. Confidence in your answers can make a huge difference.
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're genuinely interested in joining the team!
We think you need these skills to ace the Staff Machine Learning Engineer - Safety & Policy role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that align with the Staff Machine Learning Engineer role. Highlight your experience with machine learning systems, especially in safety and policy contexts, to catch our eye!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about building safe and trustworthy systems. Share specific examples of your work in multimodal machine learning or human-in-the-loop systems to show us what you can bring to the team.
Showcase Your Collaboration Skills: Since this role involves working closely with cross-functional teams, make sure to mention any past experiences where you've successfully collaborated with non-technical partners. We love seeing how you can bridge the gap between tech and policy!
Apply Through Our Website: We encourage you to apply directly through our website for the best chance of getting noticed. It’s the easiest way for us to keep track of your application and ensure it reaches the right people!
How to prepare for a job interview at Creandum
✨Know Your Machine Learning Stuff
Make sure you brush up on your machine learning fundamentals, especially around building and scaling production-grade systems. Be ready to discuss your past projects and how you've tackled challenges in ML evaluation and multimodal models.
✨Understand the Safety & Policy Landscape
Familiarise yourself with content moderation and safety policies, especially in relation to music and media platforms. Being able to articulate how you would approach policy evaluation frameworks will show that you’re aligned with the team’s mission.
✨Prepare for Cross-Functional Collaboration
Since this role involves working closely with Trust & Safety, Legal, and Public Affairs, think of examples where you've successfully collaborated across teams. Highlight your communication skills and how you’ve influenced technical direction in previous roles.
✨Embrace Ambiguity and Problem-Solving
Be ready to discuss how you navigate ambiguous situations and make trade-offs between speed, quality, and risk. Prepare some scenarios where you’ve had to make tough decisions and how you approached them, as this will resonate well with the team’s needs.