AI Research Engineer, Model Optimization and Inference in London

London · Full-Time · €60,000 – €80,000 / year (est.) · Home office (partial)

At a Glance

  • Tasks: Architect and build high-performance inference engines for digital actors in real-time entertainment.
  • Company: Join Iconic, a pioneering company at the intersection of AI, art, and storytelling.
  • Benefits: Competitive salary, equity, 25 days leave, private healthcare, and hybrid work options.
  • Other info: Be part of a friendly, inclusive culture with great career growth opportunities.
  • Why this job: Shape the future of interactive entertainment with cutting-edge AI technology.
  • Qualifications: MSc or PhD in Computer Science or related field, strong model optimization experience.

The predicted salary is between €60,000 and €80,000 per year.

The Mission

At Iconic, our virtual actors don't just generate “text” or “actions”—they perform. They need to speak, move, and perceive in milliseconds, often running locally on a player's machine alongside a rendering engine. You will bridge the gap between massive research models and the constraints of real-time interactive entertainment.

The Role

You will architect and build the inference engine that powers our digital entities. Your main task will be tearing apart model architectures to make them run as fast as possible on consumer hardware while keeping their capabilities intact for the intended usage. As part of a small, focused team, you'll have significant autonomy and end-to-end ownership. You will work at the intersection of ML systems and game technology: you might spend one day implementing a custom pruning algorithm for our TTS model, and the next writing a C++ wrapper to expose that model to our game engine. You will work closely with our Character Research team to ensure that optimization never comes at the cost of a character's soul.
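As a flavour of the pruning work mentioned above, here is a toy sketch of unstructured magnitude pruning, the simplest member of that family. All names and values are illustrative, not Iconic's production approach:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Indices of the k smallest-magnitude weights to drop.
    drop = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

w = [0.9, -0.05, 0.4, 0.01, -0.7, 0.2]
pruned = magnitude_prune(w, 0.5)  # drops the three smallest: 0.01, -0.05, 0.2
```

Production pruning is usually structured (removing whole channels or attention heads) so the savings materialize on real hardware; this sketch only shows the core idea of dropping low-magnitude weights.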

Key Responsibilities

  • Architect Low-Latency Runtimes: Build and maintain high-performance inference pipelines for Multimodal LLMs, TTS, and Vision models, targeting both server-side (H100/A100) and consumer edge (RTX 5090, Apple Silicon) environments.
  • State-of-the-Art Optimization: Implement advanced techniques like Speculative Decoding, KV-Cache quantization, PagedAttention, and Layer Pruning to minimize Time-To-First-Token (TTFT) and Time-Per-Output-Token (TPOT), maximizing throughput.
  • Model Compression: Lead our efforts in post-training quantization (AWQ, GPTQ, GGUF) and distillation to fit massive models into consumer VRAM budgets.
  • Engine Integration: Collaborate with the game engineering team to ensure thread-safe, non-blocking asynchronous inference within the game loop.
  • Custom Kernel Development: Write custom ops in CUDA, Triton, or Metal when off-the-shelf kernels aren't fast enough.
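The engine-integration responsibility is easy to misread: inference must never stall the render thread. A minimal Python sketch of that producer/consumer pattern follows; the `run_model` stub is hypothetical and stands in for a real optimized runtime:

```python
import queue
import threading
import time

def run_model(prompt):
    # Hypothetical stand-in for a real inference call.
    time.sleep(0.01)
    return f"reply:{prompt}"

class AsyncInference:
    """Runs inference on a worker thread so the game loop never blocks."""

    def __init__(self):
        self.requests = queue.Queue()
        self.results = queue.Queue()
        worker = threading.Thread(target=self._loop, daemon=True)
        worker.start()

    def _loop(self):
        while True:
            req_id, prompt = self.requests.get()
            self.results.put((req_id, run_model(prompt)))

    def submit(self, req_id, prompt):
        self.requests.put((req_id, prompt))  # returns immediately

    def poll(self):
        """Non-blocking: returns a finished result, or None if nothing is ready."""
        try:
            return self.results.get_nowait()
        except queue.Empty:
            return None

engine = AsyncInference()
engine.submit(1, "hello")
done = None
while done is None:          # a game loop would render frames here instead
    done = engine.poll()
    time.sleep(0.001)
```

In a real engine the same shape appears in C++ with lock-free queues and frame-budget-aware polling, but the contract is identical: `submit` never blocks, and `poll` returns immediately whether or not a result is ready.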

Requirements

  • MSc or PhD in Computer Science, Machine Learning, or a related field (or equivalent industry experience).
  • Strong experience with model optimization techniques (quantization, pruning, distillation, knowledge transfer).
  • Experience with LLM-specific inference optimizations (KV-cache management, speculative decoding, attention mechanisms).
  • Proficiency in C/C++.
  • Hands-on experience deploying ML models on-device or in latency-sensitive environments.
  • Proficiency in Python and deep learning frameworks (PyTorch, JAX, or TensorFlow).
  • Experience with inference optimization tools and runtimes (TensorRT, ONNX Runtime, Core ML, or similar).
  • Strong systems and engineering skills.
  • Excellent collaboration and communication skills.

Nice to Have

  • Experience with on-device AI stacks: ExecuTorch, Core ML, MLX, or ONNX Runtime.
  • Experience in CUDA programming.
  • Familiarity with non-NVIDIA compute (AMD/ROCm, DirectML, Vulkan Compute).
  • Background in real-time systems or game engines (Unreal, Unity) or Real-Time Rendering.
  • Publications or demonstrated work in efficient ML or model compression (NeurIPS, ICML, MLSys, etc.) or open-source contributions to projects like vLLM, SGLang, llama.cpp, or bitsandbytes.

Why Join Us

Be a foundational member of a team innovating at the intersection of AI, art, and storytelling. You'll help shape the research direction, culture, and technical foundations of a company building toward something genuinely new.

What we offer:

  • Competitive salary and equity compensation.
  • 25 days annual leave + bank holidays.
  • Private healthcare.
  • Based in London with hybrid work.
  • Inclusive & friendly company culture with socials and game breaks.

AI Research Engineer, Model Optimization and Inference in London employer: Iconic

At Iconic, we pride ourselves on being an exceptional employer, offering a unique opportunity for AI Research Engineers to work at the cutting edge of technology in the vibrant city of London. Our inclusive and friendly culture fosters collaboration and creativity, while our commitment to employee growth is reflected in competitive salaries, equity compensation, and generous annual leave. Join us to be part of a pioneering team that values innovation and provides the autonomy to shape the future of interactive entertainment.


Contact Detail:

Iconic Recruiting Team

StudySmarter Expert Advice🤫

We think this is how you could land the AI Research Engineer, Model Optimization and Inference role in London

Tip Number 1

Network like a pro! Reach out to folks in the industry, attend meetups, and connect with people on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.

Tip Number 2

Show off your skills! Create a portfolio showcasing your projects, especially those related to model optimization and inference. This will give potential employers a taste of what you can do and set you apart from the crowd.

Tip Number 3

Prepare for interviews by brushing up on your technical knowledge and problem-solving skills. Practice coding challenges and be ready to discuss your past experiences with model compression and real-time systems.

Tip Number 4

Don't forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are genuinely interested in joining our team.

We think you need these skills to ace the AI Research Engineer, Model Optimization and Inference role in London

  • Model Optimization Techniques
  • Quantization
  • Pruning
  • Distillation
  • Knowledge Transfer
  • LLM-specific Inference Optimizations
  • KV-cache Management

Some tips for your application 🫡

Tailor Your CV: Make sure your CV reflects the skills and experiences that match the job description. Highlight your expertise in model optimization techniques and any relevant projects you've worked on. We want to see how you can bridge the gap between research models and real-time performance!

Craft a Compelling Cover Letter: Your cover letter is your chance to show us your passion for AI and game tech. Share specific examples of your work, especially those involving low-latency runtimes or custom kernel development. Let your personality shine through while keeping it professional!

Showcase Your Projects: If you've got any personal projects or contributions to open-source that relate to model compression or inference optimization, make sure to include them! We love seeing practical applications of your skills, so don't hold back on sharing your achievements.

Apply Through Our Website: We encourage you to apply directly through our website. It's the best way to ensure your application gets into the right hands. Plus, it shows us you're genuinely interested in joining our team at Iconic!

How to prepare for a job interview at Iconic

Know Your Models Inside Out

Make sure you’re well-versed in the latest model optimization techniques like quantization and pruning. Be ready to discuss how these can be applied to real-time systems, especially in gaming contexts. Brush up on your knowledge of LLM-specific inference optimizations too!
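If quantization comes up, it helps to be able to derive the basics on a whiteboard. A toy sketch of symmetric per-tensor INT8 quantization, with illustrative values rather than any particular library's API:

```python
def quantize_int8(weights):
    """Symmetric per-tensor INT8: w ≈ scale * q, with q in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.5, -1.2, 0.033, 2.54]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-to-nearest bounds the error by half a quantization step.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Real post-training schemes such as AWQ and GPTQ add per-channel scales and activation-aware calibration, but the error analysis starts from exactly this half-step bound.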

Show Off Your Coding Skills

Since proficiency in C/C++ is a must, practice coding problems that involve writing efficient algorithms. You might even want to prepare a small project or two that showcases your ability to implement custom ops in CUDA or similar languages.

Understand the Game Engine Integration

Familiarise yourself with how inference engines work within game loops. Be prepared to discuss your experience with game engines like Unreal or Unity, and how you’ve ensured thread-safe, non-blocking asynchronous inference in past projects.

Communicate Your Ideas Clearly

Strong collaboration and communication skills are key. Practice explaining complex technical concepts in simple terms, as you’ll need to work closely with both the Character Research team and game engineers. Think about examples where you’ve successfully communicated technical ideas in a team setting.