At a Glance
- Tasks: Join us to optimise GPU performance for machine learning workloads using low-level C++ programming.
- Company: Be part of a disruptive tech company leading in ML and high-performance GPU computing.
- Benefits: Enjoy health perks, tech stipends, learning opportunities, and adventure days to explore your passions.
- Why this job: Work on groundbreaking projects that shape the future of AI and ML in a creative, collaborative environment.
- Qualifications: Proficiency in C++, GPU architectures, and a passion for machine learning are essential.
- Other info: This role offers a competitive salary up to £175k plus significant bonuses.
The predicted salary is between £105,000 and £245,000 per year.
Location: London, UK
About the Client: We are partnering with an exciting, disruptive technology company at the forefront of machine learning (ML) and high-performance GPU computing. This innovative firm is leveraging cutting-edge GPU technology to optimize machine learning algorithms and computational models, powering the next wave of AI and data-driven applications. Their mission is to drive performance optimization in ML and AI workloads, transforming industries such as autonomous vehicles, healthcare, and immersive gaming experiences. This is a fantastic opportunity for someone passionate about low-level systems programming and ML optimization to be part of a team that is reshaping the future of technology.
The Role: We are seeking a Low-Level C++ Engineer to join their team and work directly on optimizing GPU performance for machine learning (ML) workloads. As part of the ML optimization team, you will be responsible for developing and fine-tuning GPU-level solutions that accelerate machine learning training and inference. This involves working close to the GPU hardware, optimizing the underlying C++ code, and pushing the performance of ML algorithms to new heights. Your work will directly contribute to optimizing ML workloads on GPUs, enabling faster, more efficient computation for large-scale data processing and AI model training. If you’re eager to work at the intersection of low-level GPU programming and machine learning, this is the role for you.
Key Responsibilities:
- Develop and optimize low-level C++ code for GPU hardware to accelerate machine learning workloads (see the illustrative kernel sketch after this list).
- Work closely with ML engineers to implement GPU-level optimizations for ML model training and inference, focusing on speed and efficiency.
- Profile and optimize ML workloads running on GPUs, focusing on memory management, parallelization, and performance tuning.
- Develop and optimize custom GPU drivers and frameworks for ML-specific tasks, including model training, AI inference, and data preprocessing.
- Collaborate with data scientists and researchers to integrate new machine learning algorithms and enhance their GPU acceleration.
- Stay up to date with the latest GPU architecture and machine learning advancements, applying new techniques to optimize system performance.
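To give a flavour of this kind of work, here is a minimal, hypothetical CUDA sketch (kernel name, buffer names, and sizes are purely illustrative, not the client's code): a fused bias-add + ReLU kernel using a grid-stride loop with coalesced memory access, the sort of element-wise fusion that reduces memory traffic during inference.

```cpp
// Hypothetical example: a fused bias-add + ReLU kernel of the kind an ML
// optimisation engineer might write to cut memory traffic on a GPU.
#include <cuda_runtime.h>
#include <cstdio>

// Grid-stride loop so one launch covers any problem size; reads and writes are
// coalesced because consecutive threads touch consecutive elements.
__global__ void fused_bias_relu(const float* __restrict__ in,
                                const float* __restrict__ bias,
                                float* __restrict__ out,
                                int rows, int cols) {
    int total = rows * cols;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < total;
         i += blockDim.x * gridDim.x) {
        float v = in[i] + bias[i % cols];   // fuse the per-column bias add...
        out[i] = v > 0.0f ? v : 0.0f;       // ...with the ReLU activation
    }
}

int main() {
    const int rows = 1024, cols = 1024, total = rows * cols;
    float *in, *bias, *out;
    cudaMallocManaged(&in, total * sizeof(float));
    cudaMallocManaged(&bias, cols * sizeof(float));
    cudaMallocManaged(&out, total * sizeof(float));
    for (int i = 0; i < total; ++i) in[i] = (i % 7) - 3.0f;
    for (int j = 0; j < cols; ++j) bias[j] = 0.5f;

    int block = 256;
    int grid = (total + block - 1) / block;
    fused_bias_relu<<<grid, block>>>(in, bias, out, rows, cols);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);
    cudaFree(in); cudaFree(bias); cudaFree(out);
    return 0;
}
```

Fusing the element-wise bias add with the activation saves a full extra pass over GPU memory, which is usually the limiting factor for element-wise ML operations.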
Skills and Experience:
- Proficiency in C++ with a strong focus on memory management, multi-threading, and low-level performance optimizations.
- Experience with GPU architectures (e.g., NVIDIA, AMD), with GPU programming frameworks such as CUDA and OpenCL, and with ML frameworks such as TensorFlow.
- Understanding of machine learning algorithms, including model training and inference, and how to optimize these for GPU-based computation.
- Strong knowledge of parallel computing, vectorization, and multi-core systems for high-performance computing (HPC).
- Experience with profiling tools (e.g., NVIDIA Nsight, gdb, perf) and performance tuning in a GPU environment (a simple timing sketch follows this list).
- Experience working with deep learning frameworks (e.g., TensorFlow, PyTorch) or similar ML frameworks is a plus.
- Strong problem-solving skills and a keen interest in optimizing systems for ML workloads.
- A passion for machine learning, AI, and innovative technology.
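On the profiling point above, a first measurement is often taken with CUDA events before reaching for NVIDIA Nsight. The following is a hypothetical, self-contained sketch (the SAXPY kernel and all names are illustrative) of bracketing a kernel launch with events:

```cpp
// Hypothetical sketch: measuring a kernel with CUDA events, the usual first
// step before drilling into NVIDIA Nsight. Kernel and names are illustrative.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;
    std::vector<float> hx(n, 1.0f), hy(n, 2.0f);
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemcpy(x, hx.data(), n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(y, hy.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);              // block until the kernel is done

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);  // elapsed GPU time in milliseconds
    printf("saxpy on %d elements: %.3f ms\n", n, ms);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

The elapsed time comes from GPU-side timestamps, so it reflects device execution rather than host-side timing noise, a reasonable first signal for whether a tuning change helped.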
Nice to Have:
- Experience with high-performance computing (HPC) and large-scale distributed systems.
- Knowledge of AI/ML libraries such as cuDNN, TensorRT, or other GPU-accelerated libraries.
- Familiarity with low-level debugging tools and profiling techniques for performance tuning of machine learning models.
- Exposure to system-level programming on Linux or similar environments.
Benefits:
- Comprehensive Health & Wellness Package: From mental health support to personalized fitness programs and wellness retreats.
- Tech Upgrade Stipend: Receive an annual allowance to upgrade your personal tech setup, whether it's a new laptop, monitor, or VR headset.
- Learning & Development: Access exclusive technical courses, mentorship opportunities, and industry conferences.
- Innovation Days: Enjoy quarterly "Innovation Days" to explore personal projects, experiment with new technologies, or learn something new.
- Adventure Days: Take one paid day each quarter to engage in an activity that excites you — whether it’s exploring London’s best hidden spots or trying a new hobby.
- Wellness Perks: Enjoy unlimited access to the gym, yoga studio, and wellness retreat days. Plus, mental health days are encouraged and supported.
Why This Role? This is a unique opportunity to work at the cutting edge of machine learning optimization on GPUs. You’ll be part of an innovative team working on groundbreaking projects, directly influencing the future of AI and ML technologies. With an environment that encourages creativity and collaboration, this role offers a perfect balance between technical challenge and personal growth. If you're passionate about optimizing machine learning models, accelerating data-driven technologies, and working with the latest GPU hardware, we want to hear from you.
C++ Engineer (Low-Level) - up to £175k base + HUGE bonus
Employer: Hunter Bond
Contact Details:
Hunter Bond Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the C++ Engineer (Low-Level) - up to £175k base + HUGE bonus role
✨Tip Number 1
Familiarise yourself with the latest GPU architectures and their specific optimisations. Understanding how different GPUs handle machine learning tasks can give you a significant edge in interviews, as you'll be able to discuss relevant performance tuning techniques.
✨Tip Number 2
Engage with the machine learning community by attending meetups or webinars focused on GPU programming and optimisation. Networking with professionals in the field can provide insights into current trends and may even lead to referrals for job opportunities.
✨Tip Number 3
Showcase your practical experience with GPU programming frameworks like CUDA or OpenCL through personal projects or contributions to open-source initiatives. Having tangible examples of your work can make you stand out during the interview process.
✨Tip Number 4
Prepare to discuss specific challenges you've faced in optimising C++ code for GPU workloads. Being able to articulate your problem-solving approach and the results achieved will demonstrate your expertise and passion for the role.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your proficiency in C++ and experience with GPU architectures. Include specific projects or roles where you've optimised ML workloads or worked with CUDA, OpenCL, or TensorFlow.
Craft a Compelling Cover Letter: In your cover letter, express your passion for machine learning and low-level programming. Mention how your skills align with the company's mission to optimise AI and ML technologies, and provide examples of relevant work.
Showcase Relevant Projects: If you have personal or professional projects that demonstrate your ability to optimise GPU performance or work with ML algorithms, include these in your application. This can set you apart from other candidates.
Highlight Continuous Learning: Mention any recent courses, certifications, or workshops related to GPU programming, machine learning, or high-performance computing. This shows your commitment to staying updated with industry advancements.
How to prepare for a job interview at Hunter Bond
✨Showcase Your C++ Expertise
Be prepared to discuss your experience with C++, especially in low-level programming. Highlight specific projects where you optimised performance, focusing on memory management and multi-threading.
✨Demonstrate GPU Knowledge
Familiarise yourself with different GPU architectures like NVIDIA and AMD. Be ready to explain how you've used frameworks such as CUDA or OpenCL in past projects to enhance machine learning workloads.
✨Discuss Machine Learning Algorithms
Understand the fundamentals of machine learning algorithms and be able to articulate how you would optimise them for GPU computation. Prepare examples of how you've applied these concepts in real-world scenarios.
✨Prepare for Technical Questions
Expect technical questions related to profiling tools and performance tuning in a GPU environment. Brush up on tools like NVIDIA Nsight and be ready to discuss how you've used them to improve system performance.