At a Glance
- Tasks: Design and maintain scalable data pipelines using Hadoop and Apache Spark.
- Company: Join a top global consultancy firm known for innovative big data projects.
- Benefits: Enjoy remote work flexibility with a competitive daily rate of £300 to £350.
- Why this job: Be part of exciting big data initiatives and enhance your technical skills in a dynamic environment.
- Qualifications: Experience with the Open Data Platform, Python scripting, and building ETL pipelines required.
- Other info: This is a 6-month contract role, perfect for experienced Hadoop engineers.
The predicted salary is between £60,000 and £84,000 per year.
Advert
Increase your chances of an interview by reading the following overview of this role before making an application.
Hadoop Engineer
6-Month Contract
Remote working
£300 to £350 a day
A top-tier global consultancy firm is looking for an experienced Hadoop Engineer to join their team and contribute to large-scale big data projects. The position requires a professional with a strong background in developing and managing scalable data pipelines, specifically using the Hadoop ecosystem and related tools.
The role will focus on designing, building and maintaining scalable data pipelines using the Hadoop ecosystem and Apache Spark for large datasets. A key responsibility is to analyse infrastructure logs and operational data to derive insights, demonstrating a strong understanding of both data processing and the underlying systems.
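For illustration only, a pipeline step of the kind described above might look something like the following minimal PySpark sketch. The file paths, log schema (timestamp, host and level fields) and output location are hypothetical assumptions, not details taken from the advert.

```python
# Minimal illustrative sketch: load infrastructure logs, filter errors,
# and aggregate error counts per host per hour.
# Paths, field names and schema below are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("infra-log-insights").getOrCreate()

# Assume newline-delimited JSON logs with 'timestamp', 'host' and 'level' fields.
logs = spark.read.json("hdfs:///data/infra_logs/*.json")

error_rates = (
    logs.filter(F.col("level") == "ERROR")
        .withColumn("hour", F.date_trunc("hour", F.col("timestamp").cast("timestamp")))
        .groupBy("host", "hour")
        .agg(F.count("*").alias("error_count"))
        .orderBy(F.desc("error_count"))
)

# Write the aggregated insight back to the platform for downstream use.
error_rates.write.mode("overwrite").parquet("hdfs:///data/insights/error_rates")

spark.stop()
```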
The successful candidate should have the following key skills:
- Experience with the Open Data Platform
- Hands-on experience with Python for scripting
- Apache Spark
- Prior experience building ETL pipelines
- Data modelling
6-Month Contract – Remote Working – £300 to £350 a day, Inside IR35
If you are an experienced Hadoop Engineer looking for a new role, this is the perfect opportunity for you. If the above is of interest, please apply directly to the ad or send your CV to
Randstad Technologies is acting as an Employment Business in relation to this vacancy.
Hadoop Engineer ODP Platform employer: Randstad Technologies Recruitment
Contact Details:
Randstad Technologies Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Hadoop Engineer ODP Platform role
✨Tip Number 1
Familiarise yourself with the specific tools and technologies mentioned in the job description, such as Hadoop, Apache Spark, and Python. Having hands-on experience with these will not only boost your confidence but also allow you to speak more knowledgeably during any interviews.
✨Tip Number 2
Network with professionals in the big data field, especially those who have experience with the Open Data Platform. Engaging with them on platforms like LinkedIn can provide insights into the role and may even lead to referrals.
✨Tip Number 3
Prepare to discuss your previous projects involving ETL pipelines and data modelling. Be ready to explain your thought process and the challenges you faced, as this will demonstrate your problem-solving skills and practical experience.
✨Tip Number 4
Stay updated on the latest trends and advancements in big data technologies. This knowledge can help you stand out during interviews, showing that you are proactive and genuinely interested in the field.
Some tips for your application 🫡
Understand the Role: Before applying, make sure you fully understand the responsibilities and requirements of the Hadoop Engineer position. Familiarise yourself with the Hadoop ecosystem, Apache Spark, and the specific skills mentioned in the job description.
Tailor Your CV: Customise your CV to highlight your experience with Hadoop, Python scripting, and building ETL pipelines. Use specific examples from your past work that demonstrate your ability to manage scalable data pipelines and analyse operational data.
Craft a Compelling Cover Letter: Write a cover letter that not only outlines your qualifications but also expresses your enthusiasm for the role. Mention how your skills align with the company's needs and your interest in contributing to large-scale big data projects.
Proofread Your Application: Before submitting, carefully proofread your CV and cover letter for any spelling or grammatical errors. A polished application reflects your attention to detail and professionalism, which is crucial for a technical role like this.
How to prepare for a job interview at Randstad Technologies Recruitment
✨Showcase Your Technical Skills
Be prepared to discuss your experience with the Hadoop ecosystem and Apache Spark in detail. Highlight specific projects where you've developed and managed scalable data pipelines, and be ready to explain the challenges you faced and how you overcame them.
✨Demonstrate Problem-Solving Abilities
Since the role involves analysing infrastructure logs and operational data, be ready to present examples of how you've derived insights from data in previous roles. Discuss your approach to troubleshooting and optimising data processing systems.
✨Familiarise Yourself with Open Data Platform
Make sure you understand the Open Data Platform and its relevance to the role. Research its components and be prepared to discuss how you've used it or similar platforms in your past work.
✨Prepare Questions for the Interviewers
Think of insightful questions to ask about the company's big data projects and team dynamics. This shows your genuine interest in the role and helps you assess if the company is the right fit for you.