At a Glance
- Tasks: Refactor Spark jobs, optimise performance, and integrate with cloud platforms.
- Company: Join a global leader in consulting and digital transformation.
- Benefits: Enjoy remote work flexibility and competitive pay.
- Why this job: Be part of innovative projects that shape the future of cloud technology.
- Qualifications: Experience with Apache Spark, cloud environments, and containerisation tools required.
- Other info: This is a 9-month contract role with potential for growth.
The predicted salary is between £48,000 and £72,000 per year.
Role Title: Infrastructure/Platform Engineer – Apache
Duration: 9 Months
Location: Remote
Rate: £ – Umbrella only
Would you like to join a global leader in consulting, technology services and digital transformation?
Our client is at the forefront of innovation to address the entire breadth of opportunities in the evolving world of cloud, digital and platforms.
Role purpose / summary
- Refactor prototype Spark jobs into production-quality components, ensuring scalability, test coverage, and integration readiness.
- Package Spark workloads for deployment via Docker/Kubernetes and integrate with orchestration systems (e.g., Airflow, custom schedulers).
- Work with platform engineers to embed Spark jobs into InfoSum's platform APIs and data pipelines.
- Troubleshoot job failures, memory and resource issues, and execution anomalies across various runtime environments.
- Optimize Spark job performance and advise on best practices to reduce cloud compute and storage costs.
- Guide engineering teams on choosing the right execution strategies across AWS, GCP, and Azure.
- Provide subject matter expertise on using AWS Glue for ETL workloads and integration with S3 and other AWS-native services.
- Implement observability tooling for logs, metrics, and error handling to support monitoring and incident response.
- Align implementations with InfoSum's privacy, security, and compliance practices.
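As a rough illustration of the first responsibility above (refactoring prototype Spark jobs into production-quality, testable components), one common pattern is to keep transformation logic in pure functions that are independent of the SparkSession, so the business rules can be unit-tested without a cluster. The sketch below is a hypothetical example, not InfoSum's actual codebase: it uses plain Python dicts to stand in for DataFrame rows, and the function name `dedupe_events` and its fields are illustrative assumptions.

```python
# Sketch: keep transformation logic separate from Spark session wiring so it
# can be unit-tested without a cluster. Rows are plain dicts here; in a real
# PySpark job the same rule would typically be expressed as DataFrame
# operations (e.g., a window over event_id ordered by ts).

def dedupe_events(rows):
    """Keep only the latest record per event_id (illustrative business rule)."""
    latest = {}
    for row in rows:
        key = row["event_id"]
        # Replace the stored row whenever we see a newer timestamp for the key.
        if key not in latest or row["ts"] > latest[key]["ts"]:
            latest[key] = row
    # Return in a deterministic order so tests can assert on the result.
    return sorted(latest.values(), key=lambda r: r["event_id"])

if __name__ == "__main__":
    sample = [
        {"event_id": 1, "ts": 10, "value": "a"},
        {"event_id": 1, "ts": 20, "value": "b"},
        {"event_id": 2, "ts": 5, "value": "c"},
    ]
    print(dedupe_events(sample))
```

Factoring jobs this way is one route to the "test coverage and integration readiness" the role asks for: the pure function gets fast unit tests, while a thin Spark entry point handles I/O, partitioning, and cluster configuration.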
Required Skills and Experience:
- Proven experience with Apache Spark (Scala, Java, or PySpark), including performance optimization and advanced tuning techniques.
- Strong troubleshooting skills in production Spark environments, including diagnosing memory usage, shuffles, skew, and executor behavior.
- Experience deploying and managing Spark jobs in at least two major cloud environments (AWS, GCP, Azure).
- In-depth knowledge of AWS Glue, including job authoring, triggers, and cost-aware configuration.
- Familiarity with distributed data formats (Parquet, Avro), data lakes (Iceberg, Delta Lake), and cloud storage systems (S3, GCS, Azure Blob).
- Hands-on experience with Docker, Kubernetes, and CI/CD pipelines.
- Strong documentation and communication skills, with the ability to support and coach internal teams.
Key Indicators of Success:
- Spark jobs are performant, fault-tolerant, and integrated into InfoSum's platform with minimal overhead.
- Cost of running data processing workloads is optimized across cloud environments.
- Engineering teams are equipped with best practices for writing, deploying, and monitoring Spark workloads.
- Operational issues are rapidly identified and resolved, with root causes clearly documented.
- Work is delivered with a high level of independence, reliability, and professionalism.
All profiles will be reviewed against the required skills and experience. Due to the high number of applications, we will only be able to respond to successful applicants in the first instance. We thank you for your interest and the time taken to apply!
Infrastructure/Platform Engineer – Apache employer: Experis - ManpowerGroup
Contact Detail:
Experis - ManpowerGroup Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Infrastructure/Platform Engineer – Apache
✨Tip Number 1
Familiarise yourself with the specific technologies mentioned in the job description, such as Apache Spark, Docker, and Kubernetes. Having hands-on experience or projects that showcase your skills in these areas will make you stand out.
✨Tip Number 2
Network with professionals in the field of cloud computing and data engineering. Engaging with communities on platforms like LinkedIn or relevant forums can provide insights and potentially lead to referrals.
✨Tip Number 3
Prepare to discuss real-world scenarios where you've optimised Spark jobs or resolved production issues. Being able to articulate your problem-solving process will demonstrate your expertise and readiness for the role.
✨Tip Number 4
Stay updated on the latest trends and best practices in cloud environments, especially AWS, GCP, and Azure. Showing that you're proactive about learning can impress potential employers and show your commitment to the field.
We think you need these skills to ace Infrastructure/Platform Engineer – Apache
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Apache Spark, Docker, Kubernetes, and cloud environments. Use specific examples that demonstrate your troubleshooting skills and performance optimisation techniques.
Craft a Compelling Cover Letter: In your cover letter, explain why you are interested in the Infrastructure/Platform Engineer role. Mention your relevant skills and experiences, particularly those related to AWS Glue and data processing workloads, and how they align with the company's goals.
Showcase Relevant Projects: If you have worked on projects involving Spark jobs, Docker, or cloud services, include these in your application. Describe your role, the challenges faced, and the outcomes achieved to demonstrate your expertise.
Proofread Your Application: Before submitting, carefully proofread your CV and cover letter for any spelling or grammatical errors. A polished application reflects your attention to detail and professionalism, which is crucial for this role.
How to prepare for a job interview at Experis - ManpowerGroup
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Apache Spark, including specific projects where you've optimised performance or resolved issues. Highlight your familiarity with Scala, Java, or PySpark, and be ready to explain advanced tuning techniques you've employed.
✨Demonstrate Cloud Expertise
Since the role involves deploying Spark jobs in cloud environments, make sure to articulate your experience with AWS, GCP, or Azure. Discuss any specific challenges you've faced in these environments and how you overcame them.
✨Prepare for Problem-Solving Questions
Expect to tackle troubleshooting scenarios during the interview. Be ready to walk through your thought process on diagnosing memory usage, shuffles, and executor behaviour in production Spark environments.
✨Communicate Clearly and Effectively
Strong documentation and communication skills are essential for this role. Practice explaining complex technical concepts in a clear and concise manner, as you'll need to support and coach internal teams.