Senior AI Compute Infrastructure Engineer in London

London · Full-Time · £80,000 – £100,000 / year (est.) · Home office possible
Kraken

At a Glance

  • Tasks: Build and optimise AI compute infrastructure for cutting-edge crypto applications.
  • Company: Join Kraken, a mission-driven leader in the crypto space.
  • Benefits: Fully remote work, competitive salary, and opportunities for professional growth.
  • Other info: Diverse team culture with a focus on innovation and collaboration.
  • Why this job: Make a real impact in the future of crypto technology and financial freedom.
  • Qualifications: 5+ years in infrastructure engineering with GPU and ML experience.

The predicted salary is between £80,000 and £100,000 per year.

Building the Future of Crypto. Our Krakenites are a world-class team with crypto conviction, united by our desire to discover and unlock the potential of crypto and blockchain technology. What makes us different? Kraken is a mission-focused company rooted in crypto values. As a Krakenite, you’ll join us on our mission to accelerate the global adoption of crypto, so that everyone can achieve financial freedom and inclusion.

For over a decade, Kraken’s focus on our mission and crypto ethos has attracted many of the most talented crypto experts in the world. Before you apply, please read the Kraken Culture page to learn more about our internal culture, values, and mission. We also expect candidates to familiarise themselves with the Kraken app.

As a fully remote company, we have Krakenites in 70+ countries who speak over 50 languages. Krakenites are industry pioneers who develop premium crypto products for experienced traders, institutions, and newcomers to the space. Kraken is committed to industry-leading security, crypto education, and world-class client support through our products like Kraken Pro, Desktop, Wallet, and Kraken Futures.

Become a Krakenite and build the future of crypto!

Proof of work

The team: Kraken is building a dedicated AI Compute and Infrastructure team to power the next generation of model training, inference, evaluation, and experimentation across the exchange. This team sits within engineering leadership and owns the infrastructure layer that lets Kraken run AI workloads with control, speed, reliability, and cost discipline.

The team is responsible for GPU and accelerator infrastructure, cluster operations, scheduling, model serving, observability, capacity planning, and cost-efficient compute at scale. This is the backbone that allows Kraken to train, serve, evaluate, and iterate on AI systems in-house where it matters for privacy, latency, reliability, cost, or product differentiation.

You will join a small, senior, high-impact team working directly with AI/ML researchers, platform engineers, security teams, and product teams. The mandate is simple: make Kraken's AI ambitions real by building compute infrastructure that is fast, dependable, efficient, and production-grade.

The opportunity

  • Own and operate GPU and accelerator clusters used for training, inference, evaluation, and experimentation, including drivers, runtimes, kernels, device plugins, node configuration, scheduling primitives, and workload isolation.
  • Design infrastructure that enables Kraken teams to run models locally on GPUs where it is strategically and economically preferable, reducing unnecessary dependency on external providers and containing compute costs.
  • Build and improve scheduling, orchestration, placement, quota management, and utilization systems across heterogeneous accelerator environments.
  • Optimize inference pipelines for latency, throughput, reliability, memory efficiency, and cost using frameworks such as vLLM, Triton Inference Server, TensorRT, or equivalent serving stacks.
  • Partner with ML engineers and researchers to remove bottlenecks in training, evaluation, batch inference, online inference, deployment, and production debugging workflows.
  • Build observability for GPU utilization, memory pressure, queue depth, saturation, token throughput, request latency, failed workloads, capacity pressure, and spend.
  • Drive reliability, incident response, alerting, runbooks, and post-incident improvements for always-on AI compute infrastructure.
  • Evaluate and integrate new hardware, cloud instance families, specialized accelerators, runtimes, schedulers, and serving frameworks as the AI infrastructure landscape evolves.
  • Build tooling that makes GPU usage visible, accountable, and easier for internal teams to consume without needing to become infrastructure experts.
  • Contribute to long-term architecture decisions that balance performance, cost efficiency, scalability, operational simplicity, and production safety.

Skills you should HODL

  • 5+ years of infrastructure engineering experience, with significant time spent on GPU compute, ML infrastructure, distributed systems, high-performance computing, or large-scale production platforms.
  • Hands-on experience operating GPU clusters or accelerator-backed infrastructure in production or production-like environments, including scheduling, orchestration, utilization monitoring, and cost optimization.
  • Strong systems engineering fundamentals across Linux, networking, storage, containers, Kubernetes, distributed runtimes, and production debugging.
  • Experience with ML serving frameworks such as vLLM, Triton Inference Server, TensorRT, TorchServe, KServe, Ray Serve, or equivalent systems.
  • Proficiency in Python for infrastructure automation, tooling, debugging, integration, and operational workflows.
  • Practical understanding of performance tradeoffs across batching, concurrency, memory usage, GPU utilization, model size, latency, throughput, availability, and cost.
  • Track record of optimizing compute costs while maintaining clear performance, reliability, and availability expectations.
  • Experience building observable systems with useful metrics, logs, traces, dashboards, alerts, and incident workflows.
  • Comfortable working in high-stakes, always-on environments where uptime, throughput, correctness, and operational discipline are critical.
  • Clear communicator who can translate infrastructure tradeoffs for researchers, product teams, platform engineers, security stakeholders, and engineering leadership.

Nice to haves

  • Experience at a frontier AI lab, hyperscaler, high-frequency trading firm, research platform, or high-scale ML organization.
  • Familiarity with custom silicon or specialized accelerators such as TPUs, AWS Trainium, Gaudi, or similar platforms.
  • Background in capacity planning, procurement input, reserved capacity strategy, cloud accelerator economics, or GPU fleet cost management.
  • Experience with distributed training frameworks such as DeepSpeed, Megatron-LM, FSDP, Ray, or equivalent systems.
  • Experience debugging CUDA, NCCL, kernel, driver, runtime, memory, networking, or low-level performance issues.
  • Experience with Rust, C++, Go, CUDA, or other systems languages used for performance-critical infrastructure.
  • Crypto, financial services, trading infrastructure, or security-sensitive production infrastructure experience.

Unless a specific application deadline is stated in the job posting, applications are accepted on an ongoing basis. Please note, applicants are permitted to redact or remove information on their resume that identifies age, date of birth, or dates of attendance at or graduation from an educational institution.

We consider qualified applicants with criminal histories for employment on our team, assessing candidates in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance. Kraken is powered by people from around the world and we celebrate all Krakenites for their diverse talents, backgrounds, contributions and unique perspectives. We hire strictly based on merit, meaning we seek out the candidates with the right abilities, knowledge, and skills considered the most suitable for the job. We encourage you to apply for roles where you don't fully meet the listed requirements, especially if you're passionate or knowledgeable about crypto!

We may ask candidates to complete job-related skills or work-style assessments as part of our hiring process. These assessments are designed to evaluate competencies relevant to the role and are applied consistently across candidates for similar positions. Assessment results are considered alongside other relevant information, such as experience and interviews, and are not the sole basis for any employment decision.

As an equal opportunity employer, we don’t tolerate discrimination or harassment of any kind, whether based on race, ethnicity, age, gender identity, citizenship, religion, sexual orientation, disability, pregnancy, veteran status or any other characteristic protected by federal, state or local laws.

Senior AI Compute Infrastructure Engineer in London employer: Kraken

At Kraken, we pride ourselves on being a mission-driven company that champions the values of crypto and blockchain technology. As a fully remote employer with a diverse team spread across 70+ countries, we foster a collaborative and inclusive work culture that prioritises employee growth and innovation. Joining us as a Senior AI Compute Infrastructure Engineer means you'll be part of a high-impact team dedicated to building cutting-edge infrastructure, with ample opportunities for professional development and the chance to contribute to the future of financial freedom and inclusion.

Contact Detail:

Kraken Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Senior AI Compute Infrastructure Engineer role in London

✨Tip Number 1

Get to know Kraken inside out! Familiarise yourself with the Kraken app and our culture page. This will not only help you understand our mission but also show us that you're genuinely interested in being a part of the team.

✨Tip Number 2

Network like a pro! Connect with current Krakenites on LinkedIn or Twitter. Engaging with our community can give you insights into our work culture and might even lead to a referral, which can boost your chances of landing that interview.

✨Tip Number 3

Prepare for the technical side! Brush up on your skills related to GPU compute and ML infrastructure. We love candidates who can demonstrate their hands-on experience and problem-solving abilities during interviews.

✨Tip Number 4

Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows us that you’re proactive and serious about joining the Kraken family.

We think you need these skills to ace the Senior AI Compute Infrastructure Engineer role in London

Infrastructure Engineering
GPU Compute
ML Infrastructure
Distributed Systems
High-Performance Computing
Production Platforms
Scheduling and Orchestration
Utilization Monitoring
Cost Optimization
Linux Systems Engineering
Networking
Containers
Kubernetes
Python for Automation
ML Serving Frameworks

Some tips for your application 🫡

Know Your Stuff: Before you start writing, make sure you understand the role and what we’re all about at Kraken. Familiarise yourself with our culture and values, and don’t forget to check out the Kraken app. This will help you tailor your application to show how you fit in with our mission.

Be Authentic: We love seeing your personality shine through in your application! Don’t be afraid to share your passion for crypto and how your experiences align with our goals. Authenticity goes a long way in making your application stand out.

Showcase Your Skills: Highlight your relevant experience and skills that match the job description. Be specific about your hands-on experience with GPU clusters, ML infrastructure, or any other tech that’s crucial for the role. We want to see how you can contribute to our team!

Apply Through Our Website: When you’re ready to submit your application, make sure to do it through our website. It’s the best way to ensure your application gets into the right hands. Plus, it shows you’re serious about joining our Krakenite family!

How to prepare for a job interview at Kraken

✨Know Your Stuff

Before the interview, dive deep into the specifics of GPU and AI infrastructure. Familiarise yourself with the tools and frameworks mentioned in the job description, like Triton Inference Server and TensorRT. Being able to discuss these technologies confidently will show that you're not just a candidate, but a potential team member who understands the landscape.

✨Show Your Problem-Solving Skills

Prepare to discuss past experiences where you tackled complex infrastructure challenges. Think about specific examples where you optimised compute costs or improved system reliability. This will demonstrate your hands-on experience and ability to think critically under pressure, which is crucial for a role at Kraken.

✨Embrace the Culture

Kraken values its mission and culture, so make sure you understand their ethos around crypto and financial freedom. Be ready to share why you’re passionate about crypto and how you align with their values. This connection can set you apart from other candidates who may focus solely on technical skills.

✨Ask Insightful Questions

Prepare thoughtful questions that show your interest in the role and the company. Inquire about the team's current projects, challenges they face, or how they measure success in AI infrastructure. This not only shows your enthusiasm but also helps you gauge if Kraken is the right fit for you.
