At a Glance
- Tasks: Join us to build and optimise AI inference engines using Rust, Python, and cutting-edge GPU technologies.
- Company: Dynamic tech company focused on innovative AI solutions and collaborative teamwork.
- Benefits: Competitive salary, equity options, remote work flexibility, and opportunities for professional growth.
- Other info: Fast-paced environment with exciting challenges and excellent career advancement potential.
- Why this job: Make a real impact in AI by working on high-performance systems and modern LLM architectures.
- Qualifications: 3+ years in software engineering with expertise in ML inference and GPU programming.
The predicted salary is between £70,000 and £90,000 per year.
We are looking for an AI Inference Engineer to join our growing team. We build and run the inference engine behind every Perplexity query and deploy dozens of model architectures at scale with tight latency and cost budgets. Our stack is Rust, Python, CUDA, and CuTe DSL.
Responsibilities
- New model support. Bring transformer-based retrieval, text-generation, and multimodal models into our inference infrastructure, from weight loading, request scheduling, and KV-cache management through to API Gateway support.
- GPU kernel migration to CuTe DSL. Port our in-house CUDA kernels to NVIDIA's CuTe DSL so they run on GB200 today and are portable to Vera Rubin racks tomorrow.
- Rust-native serving runtime. Develop our internal Rust-based inference server to eliminate Python-side pain points and keep pace with rapidly growing traffic.
- Performance optimisation. Profile and fix bottlenecks from network ingress through continuous batching and GPU kernel interleaving.
- Reliability and observability. Build dashboards, alerts, and automated remediation so we catch regressions before users do. Respond to and learn from production incidents.
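As a rough illustration of the continuous-batching idea mentioned above, here is a toy sketch: finished sequences leave the batch and queued requests are admitted between decode steps, rather than waiting for the whole batch to drain. All names and structure here are hypothetical, not Perplexity's actual runtime.

```python
# Toy continuous batcher: admits new requests into free batch slots
# between decode steps. Illustrative sketch only.
from collections import deque
from dataclasses import dataclass


@dataclass
class Request:
    rid: int
    max_new_tokens: int
    generated: int = 0


class ContinuousBatcher:
    def __init__(self, max_batch_size: int):
        self.max_batch_size = max_batch_size
        self.queue: deque = deque()     # waiting requests
        self.running: list = []         # requests currently in the batch

    def submit(self, req: Request) -> None:
        self.queue.append(req)

    def step(self) -> list:
        """Run one decode step; return ids of requests that finished."""
        # Admit queued requests into any free batch slots.
        while self.queue and len(self.running) < self.max_batch_size:
            self.running.append(self.queue.popleft())
        # One decode step: every running request emits one token.
        for req in self.running:
            req.generated += 1
        finished = [r.rid for r in self.running
                    if r.generated >= r.max_new_tokens]
        self.running = [r for r in self.running
                        if r.generated < r.max_new_tokens]
        return finished
```

The point of the pattern is that short requests never wait on long ones: a slot frees up the moment a sequence hits its token budget, which is what keeps latency tight under mixed workloads.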
Who We’re Looking For
- Deep experience with GPU programming and performance work (CUDA, Triton, CUTLASS, or similar). Any other deep systems programming experience is a plus.
- You understand modern LLM architectures and are able to bring them up reliably in a production environment.
- You’ve built and operated production distributed systems under real load, ideally performance-critical ones.
- Comfortable working across languages and layers: Rust for the serving runtime, Python for model code, CUDA/CuTe DSL for kernels.
- You own problems end-to-end. You can read a research paper on Monday, write a kernel on Wednesday, and debug a production incident on Friday.
- Self-directed. You do well in fast-moving environments where the path forward isn’t laid out for you.
Nice-to-have
- ML compilers and framework internals: PyTorch internals, torch.compile, custom operators.
- Distributed GPU communication: NCCL, NVLink, InfiniBand, RDMA libraries, model/tensor parallelism.
- Low-precision inference: INT8/FP8/FP4 quantization, mixed-precision serving.
- Profiling and debugging tools: Nsight Compute/Systems, CUDA-GDB, PTX/SASS analysis.
- Container orchestration: Kubernetes, GPU scheduling, autoscaling inference workloads.
Qualifications
- 3+ years of professional software engineering experience with meaningful work on ML inference or high-performance systems.
- Familiarity with at least one deep learning framework (PyTorch, JAX, TensorFlow).
- Understanding of GPU architectures (memory hierarchy, warp scheduling, tensor cores).
- Understanding of common LLM architectures and inference optimization techniques (e.g. quantization, speculative decoding, prefill-decode disaggregation).
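To make one of the listed techniques concrete, here is a toy sketch of the greedy variant of speculative decoding: a cheap draft model proposes k tokens, the target model verifies them, and the longest agreeing prefix is kept plus one corrected token from the target. The function names are hypothetical; production systems use probabilistic acceptance over the models' distributions rather than this greedy match.

```python
# Toy greedy speculative decoding step. draft_next/target_next are
# stand-in callables: fn(tokens) -> next token (greedy argmax).
# Illustrative sketch only, not a production implementation.
def speculative_step(draft_next, target_next, prefix, k):
    # 1. Draft k tokens autoregressively with the cheap model.
    proposal = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        proposal.append(t)
        ctx.append(t)
    # 2. Verify: keep the longest prefix the target model agrees with.
    accepted = []
    ctx = list(prefix)
    for t in proposal:
        if target_next(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break
    # 3. Always emit one token from the target (correction or bonus),
    # so each step yields at least one target-quality token.
    accepted.append(target_next(ctx))
    return accepted
```

The win comes from step 2: when the draft model is usually right, the expensive target model validates several tokens per forward pass instead of producing one.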
Final offer amounts are determined by multiple factors including experience and expertise. Equity: In addition to the base salary, equity may be part of the total compensation package.
AI Inference Engineer | GPU-Scale Rust/Python | Equity in London employer: Perplexity
Contact Detail:
Perplexity Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land AI Inference Engineer | GPU-Scale Rust/Python | Equity in London
✨Tip Number 1
Network, network, network! Get in touch with folks in the industry, attend meetups, and join online forums. The more connections you make, the better your chances of landing that AI Inference Engineer role.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects, especially those involving GPU programming or ML inference. This gives you a chance to demonstrate your expertise beyond just a CV.
✨Tip Number 3
Prepare for technical interviews by brushing up on relevant concepts like CUDA, Rust, and performance optimisation techniques. Practise coding challenges and system design questions so you feel confident when it’s your turn to shine.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, you can tailor your approach based on the specific needs outlined in the job description.
We think you need these skills to ace AI Inference Engineer | GPU-Scale Rust/Python | Equity in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with GPU programming and performance work. We want to see how your skills align with our tech stack, so don’t be shy about showcasing your Rust, Python, and CUDA expertise!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Tell us why you’re excited about the AI Inference Engineer role and how your background makes you a perfect fit. We love seeing passion and personality, so let it show!
Showcase Your Projects: If you've worked on any relevant projects, especially those involving distributed systems or ML inference, make sure to mention them. We’re keen to see real-world applications of your skills, so include links or descriptions of your work.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just follow the prompts!
How to prepare for a job interview at Perplexity
✨Know Your Tech Stack
Make sure you’re well-versed in Rust, Python, CUDA, and CuTe DSL. Brush up on your knowledge of GPU programming and performance optimisation techniques. Being able to discuss how you've used these technologies in past projects will show that you're not just familiar with them, but that you can apply them effectively.
✨Demonstrate Problem Ownership
Be ready to share examples of how you've taken ownership of complex problems from start to finish. Whether it’s reading a research paper, writing a kernel, or debugging a production incident, illustrate your ability to tackle challenges head-on. This will resonate well with the team’s expectation for self-directed individuals.
✨Showcase Your Experience with Distributed Systems
Highlight any experience you have with building and operating production distributed systems, especially under load. Discuss specific instances where you’ve optimised performance or resolved bottlenecks. This will demonstrate your capability to handle the demands of their inference infrastructure.
✨Prepare for Technical Questions
Expect technical questions that dive deep into modern LLM architectures and inference optimisation techniques. Brush up on topics like quantization and profiling tools. Practising coding problems related to GPU programming can also give you an edge during the interview.