AI Research Engineer (Model Serving & Inference)

London · Full-Time · No home office possible

Join Tether and Shape the Future of Digital Finance

At Tether, we're pioneering a global financial revolution with innovative blockchain solutions that enable seamless digital token transactions worldwide. Our products include the trusted stablecoin USDT, energy-efficient Bitcoin mining solutions, advanced data sharing apps like KEET, and educational initiatives to democratize digital knowledge.

Why join us? Our remote, global team is passionate about fintech innovation. We seek individuals with excellent English communication skills who are eager to contribute to cutting-edge projects in a fast-growing industry.

About the job:

As part of our AI model team, you will innovate in model serving and inference architectures for advanced AI systems. Your focus will be on optimizing deployment strategies to ensure high responsiveness, efficiency, and scalability across various applications and hardware environments.

Responsibilities:

  1. Design and deploy high-performance, resource-efficient model serving architectures adaptable to diverse environments.
  2. Establish and track performance metrics like latency, throughput, and memory usage.
  3. Develop and monitor inference tests, analyze results, and validate performance improvements.
  4. Prepare realistic datasets and scenarios to evaluate model performance in low-resource settings.
  5. Identify bottlenecks and optimize serving pipelines for scalability and reliability.
  6. Collaborate with teams to integrate optimized frameworks into production, ensuring continuous improvement.

Qualifications:

  • Degree in Computer Science or a related field; PhD in NLP, Machine Learning, or a related area preferred, with a strong publication record.
  • Proven experience in low-level kernel and inference optimizations on mobile devices, with measurable improvements.
  • Deep understanding of model serving architectures, optimization techniques, and memory management in resource-constrained environments.
  • Expertise in CPU/GPU kernel development for mobile platforms and deploying inference pipelines on such devices.
  • Ability to apply empirical research to overcome latency bottlenecks and memory challenges, with experience in evaluation frameworks and iterative optimization.


Contact Detail:

Tether Operations Limited Recruiting Team
