At a Glance
- Tasks: Design and build scalable data pipelines and internal tools for large-scale data processing.
- Company: Join a global tech leader focused on innovation and collaboration.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Why this job: Make an impact by developing cutting-edge solutions that enhance data workflows.
- Qualifications: Strong programming skills in Python and experience with distributed systems.
- Other info: Dynamic team environment with a focus on engineering excellence and mentorship.
The predicted salary is between £36,000 and £60,000 per year.
We are seeking an experienced Infrastructure Lead / Senior Software Engineer to help our client, a global multinational technology company, build and scale the internal systems that power large-scale data processing and operational workflows. In this role, you will design and operate high-throughput data pipelines, distributed services, and internal tools that support teams working with complex datasets and high-volume task processing.
What You’ll Do:
- Build Scalable Data Pipelines: Design and implement high-volume data processing pipelines that ingest, transform, and distribute large datasets. Develop systems that can prioritise, queue, and process work efficiently at scale. Ensure pipelines are reproducible, maintainable, and resilient to failure.
- Design Platform Services: Build modular backend services and platform components that support evolving data workflows. Develop storage abstraction layers that allow services to interact with multiple underlying data systems. Implement mechanisms to ensure data consistency, deduplication, and integrity across distributed workflows.
- Operate Production Infrastructure: Deploy and manage containerised workloads and batch processing jobs. Improve system reliability through fault tolerance, coordination mechanisms, and automated recovery processes. Work with infrastructure and security teams to ensure secure service communication and controlled data access.
- Build Internal Tools & Automation: Develop tools that allow operational teams to ingest, manage, and analyse structured and unstructured datasets. Improve workflow efficiency by automating repetitive processes and enabling self-service data operations. Integrate platform services with internal tooling used by analysts and operational teams.
- Drive Engineering Excellence: Establish engineering best practices for code quality, testing, and maintainable system design. Write clear documentation to support onboarding, system understanding, and operational troubleshooting. Implement monitoring and diagnostic tools to provide visibility into pipeline performance and system health.
Technical Skills:
- Strong programming experience, including mastery of Python, plus experience with similar backend languages.
- Experience building distributed systems or large-scale data pipelines.
- Solid knowledge of SQL and working with large datasets.
- Experience operating containerised infrastructure or production data platforms.
- Familiarity with service-based architectures and API-driven systems.
Experience:
- Several years of experience building backend infrastructure, data platforms, or internal tooling.
- Experience operating production systems handling large volumes of data or tasks.
- Comfortable collaborating across engineering and operational teams.
Nice to Have:
- Experience building internal platforms used by analysts, operations teams, or data specialists.
- Exposure to automation systems or AI-assisted tooling.
- Experience improving developer productivity or operational workflows through tooling.
- Background mentoring engineers or leading infrastructure initiatives.
Resources Global Professionals (RGP) is a worldwide consulting firm. Our model is unique and tailored to meet our consultants’ professional and personal goals.
Infrastructure Lead / Senior Software Engineer - Data Platforms & Internal Tooling employer: RGP
Contact Detail:
RGP Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Infrastructure Lead / Senior Software Engineer - Data Platforms & Internal Tooling
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. We all know that sometimes it’s not just what you know, but who you know that can help you land that dream job.
✨Tip Number 2
Show off your skills! Create a portfolio or GitHub repository showcasing your projects and contributions. We want to see what you can do, so make it easy for potential employers to check out your work.
✨Tip Number 3
Prepare for interviews by practising common questions and scenarios related to data platforms and infrastructure. We recommend doing mock interviews with friends or using online platforms to boost your confidence.
✨Tip Number 4
Apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with data pipelines and backend services. We want to see how your skills match the job description, so don’t be shy about showcasing your relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about building scalable systems and how your background makes you a perfect fit for this role. Let us know what excites you about working with large datasets!
Showcase Your Technical Skills: Don’t forget to mention your programming prowess, especially in Python and SQL. We love seeing examples of your work with distributed systems or containerised infrastructure, so include any relevant projects or experiences!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates during the process!
How to prepare for a job interview at RGP
✨Know Your Tech Inside Out
Make sure you brush up on your programming skills, especially in Python and SQL. Be ready to discuss your experience with building distributed systems and large-scale data pipelines. Prepare examples of how you've designed and operated high-throughput data pipelines in the past.
✨Showcase Your Problem-Solving Skills
Be prepared to tackle hypothetical scenarios during the interview. Think about how you would approach designing modular backend services or improving system reliability. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight your problem-solving abilities.
✨Demonstrate Collaboration Experience
Since this role involves working closely with engineering and operational teams, be ready to share examples of successful collaborations. Talk about how you've integrated platform services with internal tools or improved workflows through automation. Highlight your communication skills and ability to work cross-functionally.
✨Prepare Questions for Them
Interviews are a two-way street! Prepare insightful questions about their current data platforms, challenges they face, or their engineering best practices. This shows your genuine interest in the role and helps you assess if the company is the right fit for you.