At a Glance
- Tasks: Design and maintain data pipelines using Hadoop technologies on an on-premises Operational Data Platform.
- Company: Join a forward-thinking company focused on enhancing operational data platforms.
- Benefits: Enjoy hybrid working, competitive salary, and opportunities for professional growth.
- Other info: Work in a collaborative environment with a focus on security and compliance.
- Why this job: Be part of a dynamic team driving innovation in data engineering and analytics.
- Qualifications: 5+ years in Hadoop and data engineering with strong Python skills required.
The predicted salary is between £48,000 and £72,000 per year.
Role Title: Hadoop Engineer / ODP Platform
Location: Birmingham / Sheffield – Hybrid working with 3 days onsite per week
End Date: 28/11/2025
Role Overview:
We are seeking a highly skilled Hadoop Engineer to support and enhance our Operational Data Platform (ODP) deployed in an on-premises environment.
The ideal candidate will have extensive experience in the Hadoop ecosystem, strong programming skills, and a solid understanding of infrastructure-level data analytics. This role focuses on building and maintaining scalable, secure, and high-performance data pipelines within enterprise-grade on-prem systems.
Key Responsibilities:
- Design, develop, and maintain data pipelines using Hadoop technologies in an on-premises infrastructure.
- Build and optimise workflows using Apache Airflow and Spark Streaming for real-time data processing (a minimal illustration follows this list).
- Develop robust data engineering solutions using Python for automation and transformation.
- Collaborate with infrastructure and analytics teams to support operational data use cases.
- Monitor and troubleshoot data jobs, ensuring reliability and performance across the platform.
- Ensure compliance with enterprise security and data governance standards.
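To make the pipeline and workflow responsibilities above concrete, here is a minimal Airflow DAG sketch in Python that lands files in HDFS and then submits a Spark job. It is illustrative only: the DAG id, file paths, and job script are hypothetical placeholders, not details from this posting.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-engineering",  # hypothetical team name
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
}

with DAG(
    dag_id="odp_daily_ingest",  # hypothetical pipeline name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Land staged raw files into HDFS (paths are placeholders).
    ingest = BashOperator(
        task_id="ingest_to_hdfs",
        bash_command="hdfs dfs -put -f /staging/raw/*.csv /data/odp/raw/",
    )

    # Submit a Spark transformation job to the YARN cluster.
    transform = BashOperator(
        task_id="spark_transform",
        bash_command="spark-submit --master yarn /opt/jobs/transform_odp.py",
    )

    # Run the Spark job only after ingestion succeeds.
    ingest >> transform
```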
Required Skills & Experience:
- Minimum 5 years of experience in Hadoop and data engineering.
- Strong hands-on experience with Python, Apache Airflow, and Spark Streaming.
- Deep understanding of Hadoop components (HDFS, Hive, HBase, YARN) in on-prem environments.
- Exposure to data analytics, preferably involving infrastructure or operational data.
- Experience working with Linux systems, shell scripting, and enterprise-grade deployment tools.
- Familiarity with monitoring and logging tools relevant to on-prem setups.
Preferred Qualifications:
- Experience with enterprise ODP platforms or similar large-scale data systems.
- Knowledge of configuration management tools (e.g., Ansible, Puppet) and CI/CD in on-prem environments.
- Understanding of network and storage architecture in data centers.
- Familiarity with data security, compliance, and audit requirements in regulated industries.
Employer: Experis
Contact Details:
Experis Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Hadoop Engineer - ODP Platform role
✨ Tip Number 1
Make sure to showcase your hands-on experience with Hadoop technologies during networking opportunities. Attend local meetups or online forums related to big data and Hadoop, where you can connect with professionals in the field and discuss your expertise.
✨ Tip Number 2
Familiarise yourself with the specific tools mentioned in the job description, such as Apache Airflow and Spark Streaming. Consider working on personal projects or contributing to open-source projects that utilise these technologies to demonstrate your skills.
✨ Tip Number 3
Engage with Experis on social media platforms like LinkedIn. Follow the company, interact with its posts, and share relevant content to increase your visibility. This can help you stand out when your application is reviewed.
✨ Tip Number 4
Prepare for potential technical interviews by brushing up on your Python programming skills and understanding of Hadoop components. Practice coding challenges and be ready to discuss your previous projects and how they relate to the role.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Hadoop, Python, and data engineering. Use specific examples from your past roles that demonstrate your skills in building data pipelines and working with on-premises infrastructure.
Craft a Strong Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention your relevant experience with Hadoop technologies and how you can contribute to enhancing their Operational Data Platform.
Showcase Relevant Projects: If you have worked on projects involving Apache Airflow, Spark Streaming, or similar technologies, be sure to include these in your application. Describe your role and the impact of your contributions.
Highlight Compliance Knowledge: Since the role requires knowledge of data security and compliance, mention any relevant experience you have in these areas. This could include working with regulated industries or understanding data governance standards.
How to prepare for a job interview at Experis
✨ Showcase Your Hadoop Expertise
Make sure to highlight your extensive experience with the Hadoop ecosystem. Be prepared to discuss specific projects where you've designed, developed, or maintained data pipelines, and how you tackled challenges in an on-premises environment.
✨ Demonstrate Programming Skills
Since strong programming skills are crucial for this role, be ready to talk about your experience with Python, Apache Airflow, and Spark Streaming. Consider preparing examples of how you've used these technologies to automate processes or enhance data workflows.
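If you want a concrete artefact to walk through, a minimal Spark Structured Streaming job in Python (the current streaming API) is one option. This sketch reads events from Kafka and writes Parquet files to HDFS; the broker address, topic, and paths are hypothetical placeholders, and it assumes the spark-sql-kafka connector is on the classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("odp-stream-demo").getOrCreate()

# Read a stream of events from Kafka (placeholder broker and topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "odp-events")
    .load()
)

# Kafka delivers the payload as binary; cast it to a string for downstream parsing.
parsed = events.select(col("value").cast("string").alias("payload"))

# Write micro-batches to HDFS as Parquet, tracking progress via a checkpoint directory.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "hdfs:///data/odp/streaming/")
    .option("checkpointLocation", "hdfs:///checkpoints/odp")
    .start()
)

query.awaitTermination()
```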
✨ Understand the Infrastructure
Familiarise yourself with the infrastructure-level data analytics concepts that are relevant to the role. Be prepared to discuss your understanding of Hadoop components like HDFS, Hive, and YARN, and how they fit into the overall architecture of an Operational Data Platform.
✨ Prepare for Collaboration Questions
Collaboration is key in this role, so think about past experiences where you've worked with infrastructure and analytics teams. Be ready to share how you contributed to operational data use cases and how you ensured compliance with security and governance standards.