At a Glance
- Tasks: Elevate data infrastructure and manage end-to-end DataOps processes in a small, dynamic team.
- Company: Join a forward-thinking company revolutionizing the home-moving experience with innovative data solutions.
- Benefits: Enjoy flexible hours, hybrid work, generous holidays, private medical insurance, and career development opportunities.
- Why this job: Be part of a transformative project while working remotely and collaborating with a passionate team.
- Qualifications: Expertise in Azure, Databricks, Kubernetes, and strong knowledge of Azure DevOps required.
- Other info: Monthly in-person meetings in Oxford to foster team collaboration.
The predicted salary is between £48,000 and £112,000 per year.
Senior Data Engineer
Location: Remote working (once a month in Oxford)
Salary: £80k + competitive benefits package
Role Overview: Our client is seeking a Senior Data Engineer to elevate their data infrastructure’s reliability and scalability. Reporting to the Data & Analytics Director, you will drive the development of a larger data mesh setup while managing end-to-end DataOps/DevOps processes in a dynamic, small team. This role will focus on improving data ingestion, processing, and infrastructure, primarily within Databricks, and ensuring the quality and stability of the entire system.
Key Responsibilities:
- Enhance data infrastructure for stability and reliability
- Manage data ingestion pipelines (ADF, Python)
- Oversee Kubernetes deployments (Airflow, Superset, RStudio Connect)
- Support data analytics via DBT, deployment rules, and DevOps
Skills & Experience:
- Expertise in Azure/Databricks/Delta Lake
- Proficiency in Kubernetes, Terraform, and Infrastructure as Code
- Strong knowledge of Azure DevOps
- Innovative, with a focus on simplicity and transparency
Benefits:
- Flexible hours, hybrid working, generous holiday package
- Private medical insurance, pension scheme, volunteering day, and more
- Regular social events and a strong focus on career development
Join our client in revolutionizing the home-moving experience!
Senior Data Engineer employer: Harnham
Contact Detail:
Harnham Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Familiarize yourself with the specific tools and technologies mentioned in the job description, such as Azure, Databricks, and Kubernetes. Having hands-on experience or relevant projects to discuss can set you apart during conversations.
✨Tip Number 2
Network with professionals in the data engineering field, especially those who work with data mesh setups or have DataOps/DevOps experience. Engaging in discussions on platforms like LinkedIn can help you gain insights and potentially earn referrals.
✨Tip Number 3
Prepare to discuss your previous experiences with data ingestion pipelines and how you've managed them in past roles. Be ready to share specific examples of challenges you faced and how you overcame them.
✨Tip Number 4
Showcase your innovative mindset by thinking of ways to improve data infrastructure and processes. During interviews, share ideas that reflect your ability to simplify complex systems while ensuring reliability and stability.
We think you need these skills to ace the Senior Data Engineer interview
Some tips for your application 🫡
Understand the Role: Make sure to thoroughly read the job description for the Senior Data Engineer position. Highlight the key responsibilities and required skills, and think about how your experience aligns with them.
Tailor Your CV: Customize your CV to emphasize your expertise in Azure, Databricks, and Kubernetes. Include specific examples of past projects where you managed data ingestion pipelines or worked with Infrastructure as Code.
Craft a Compelling Cover Letter: Write a cover letter that showcases your innovative approach to data engineering. Mention your experience with DataOps/DevOps processes and how you can contribute to enhancing the data infrastructure.
Highlight Relevant Projects: In your application, include details about relevant projects you've worked on, especially those involving data mesh setups or using tools like Airflow and DBT. This will demonstrate your hands-on experience and problem-solving skills.
How to prepare for a job interview at Harnham
✨Showcase Your Technical Expertise
Be prepared to discuss your experience with Azure, Databricks, and Delta Lake in detail. Highlight specific projects where you've enhanced data infrastructure or managed data ingestion pipelines using ADF and Python.
✨Demonstrate Your Problem-Solving Skills
Expect questions that assess your ability to troubleshoot and optimize data processes. Share examples of challenges you've faced in previous roles and how you resolved them, particularly in a DataOps/DevOps context.
✨Familiarize Yourself with Kubernetes and Terraform
Since the role involves overseeing Kubernetes deployments, make sure you can discuss your experience with Kubernetes, Terraform, Airflow, and Infrastructure as Code. Be ready to explain how you've used these tools to improve system reliability.
✨Emphasize Your Innovative Mindset
The company values innovation and simplicity. Prepare to share ideas on how you would approach improving data ingestion and processing. Discuss any past initiatives where you implemented innovative solutions that led to significant improvements.