At a Glance
- Tasks: Design and build Big Data platforms using Apache Hadoop, supporting cloud and on-premises applications.
- Company: HCLTech is a global tech leader with over 219,000 employees, delivering innovative solutions across various industries.
- Benefits: Enjoy a dynamic work environment with opportunities for remote work and collaboration on open-source projects.
- Why this job: Join a team of experts, contribute to impactful open-source projects, and enhance your skills in a cutting-edge field.
- Qualifications: Experience in platform engineering, application engineering, and a strong understanding of the Apache ecosystem required.
- Other info: Ideal for passionate individuals eager to make a mark in the Big Data landscape.
The predicted salary is between £43,200 and £72,000 per year.
HCLTech is a global technology company, home to 219,000+ people across 54 countries, delivering industry-leading capabilities centered on digital, engineering and cloud, powered by a broad portfolio of technology services and products. We work with clients across all major verticals, providing industry solutions for Financial Services, Manufacturing, Life Sciences and Healthcare, Technology and Services, Telecom and Media, Retail and CPG, and Public Services. Consolidated revenues stand at $13+ billion.
Location - London
Skill - Apache Hadoop
We are looking for open-source contributors to Apache projects who have an in-depth understanding of the code behind the Apache ecosystem, experience with Cloudera or a similar distribution, and in-depth knowledge of the big data tech stack.
Requirements:
- Hands-on experience in platform engineering as well as application engineering
- Experience in designing an open-source platform based on the Apache Hadoop framework
- Experience integrating Infrastructure-as-Code into the platform (bespoke implementation from scratch)
- Experience in design and architecture work for the open-source Apache platform in a hybrid cloud environment
- Ability to debug and fix code in the open-source Apache codebase, and to act as an individual contributor to open-source projects
Job description:
The Apache Hadoop project requires up to three individuals with experience in designing and building platforms and in supporting applications both in cloud environments and on-premises. They are expected to be open-source contributors to Apache projects, to have an in-depth understanding of the code behind the Apache ecosystem, and to be capable of identifying and fixing complex issues during delivery. These three individuals will support all developers in migrating and debugging various critical RiskFinder applications, and they need to be developers who are expert in designing and building Big Data platforms using Apache Hadoop and in supporting Apache Hadoop implementations both in cloud environments and on-premises.
Big Data Architect (Open source contributor) employer: JobFlurry
Contact Detail:
JobFlurry Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Big Data Architect (Open source contributor) role
✨Tip Number 1
Engage with the Apache community by contributing to forums and discussions. This will not only showcase your expertise but also help you network with other professionals in the field, which can lead to job opportunities.
✨Tip Number 2
Showcase your open-source contributions on platforms like GitHub. Highlighting your work on relevant Apache projects can demonstrate your hands-on experience and commitment to the community, making you a more attractive candidate.
✨Tip Number 3
Attend industry meetups or conferences focused on big data and Apache technologies. These events are great for networking and can provide insights into what companies like HCLTech are looking for in candidates.
✨Tip Number 4
Familiarise yourself with the latest trends and updates in the Apache ecosystem. Being knowledgeable about recent developments can give you an edge during interviews and show your passion for the field.
Some tips for your application 🫡
Understand the Role: Before applying, make sure you fully understand the requirements and responsibilities of a Big Data Architect at HCLTech. Familiarise yourself with Apache Hadoop and the specific skills they are looking for, such as experience in platform engineering and open-source contributions.
Tailor Your CV: Customise your CV to highlight relevant experience in big data technologies, particularly with Apache projects. Emphasise your hands-on experience with Cloudera or similar distributions, and any previous roles where you contributed to open-source platforms.
Craft a Compelling Cover Letter: Write a cover letter that showcases your passion for open-source contributions and your understanding of the Apache ecosystem. Mention specific projects you've worked on and how your skills align with the needs of HCLTech.
Showcase Your Projects: If you have contributed to any open-source projects, be sure to include links or descriptions of your work. Highlight your role in debugging and fixing code, as well as any design and architecture work you've done in hybrid cloud environments.
How to prepare for a job interview at JobFlurry
✨Showcase Your Open Source Contributions
Make sure to highlight any previous contributions you've made to Apache projects or similar open-source initiatives. Discuss specific examples of your work, the challenges you faced, and how you overcame them.
✨Demonstrate Technical Expertise
Be prepared to discuss your in-depth knowledge of the Apache ecosystem, particularly Hadoop. Brush up on key concepts, recent updates, and best practices, as you may be asked technical questions that assess your understanding.
✨Prepare for Scenario-Based Questions
Expect scenario-based questions that test your problem-solving skills. Think about past experiences where you had to debug complex issues or design a platform from scratch, and be ready to explain your thought process.
✨Understand the Company’s Vision
Research HCLTech and its role in the industry. Familiarise yourself with their projects and values, especially in relation to big data and cloud solutions. This will help you align your answers with their goals during the interview.