At a Glance
- Tasks: Elevate data infrastructure and manage end-to-end DataOps processes in a dynamic team.
- Company: Join a forward-thinking company revolutionizing the home-moving experience.
- Benefits: Enjoy flexible hours, hybrid work, generous holidays, and private medical insurance.
- Why this job: Be part of an innovative team focused on data reliability and scalability with strong career development.
- Qualifications: Expertise in Azure, Databricks, Kubernetes, and Infrastructure as Code required.
- Other info: Work remotely with monthly meetings in Oxford and participate in regular social events.
The predicted salary is between £48,000 and £112,000 per year.
Senior Data Engineer
Location: Remote working (once a month in Oxford)
Salary: £80k + competitive benefits package
Role Overview:
Our client is seeking a Senior Data Engineer to elevate their data infrastructure's reliability and scalability. Reporting to the Data & Analytics Director, you will drive the development of a larger data mesh setup while managing end-to-end DataOps/DevOps processes in a dynamic, small team. This role will focus on improving data ingestion, processing, and infrastructure, primarily within Databricks, and ensuring the quality and stability of the entire system.
Key Responsibilities:
- Enhance data infrastructure for stability and reliability
- Manage data ingestion pipelines (ADF, Python)
- Oversee Kubernetes deployments (Airflow, Superset, RStudio Connect)
- Support Data Analytics via DBT, deployment rules, and DevOps
Skills & Experience:
- Expertise in Azure/Databricks/Delta Lake
- Proficiency in Kubernetes, Terraform, and Infrastructure as Code
- Strong knowledge of Azure DevOps
- Innovative, with a focus on simplicity and transparency
Benefits:
- Flexible hours, hybrid working, generous holiday package
- Private medical insurance, pension scheme, volunteering day, and more
- Regular social events and a strong focus on career development
Join our client in revolutionizing the home-moving experience!
Senior Data Engineer employer: Harnham
Contact Detail:
Harnham Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role
✨Tip Number 1
Familiarize yourself with the specific tools and technologies mentioned in the job description, such as Azure, Databricks, and Kubernetes. Having hands-on experience or relevant projects to discuss can set you apart during the interview.
✨Tip Number 2
Showcase your understanding of DataOps and DevOps processes. Be prepared to discuss how you've implemented these practices in previous roles, as this will demonstrate your ability to manage end-to-end data workflows effectively.
✨Tip Number 3
Highlight any experience you have with data ingestion pipelines and tools like ADF and Python. Providing examples of how you've improved data processing or infrastructure stability in past projects can make a strong impression.
✨Tip Number 4
Emphasise your innovative mindset and focus on simplicity and transparency. Prepare to share instances where you've simplified complex processes or made a data environment more transparent, as this aligns with the company's values.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Azure, Databricks, and Kubernetes. Use specific examples that demonstrate your expertise in data ingestion pipelines and DevOps processes.
Craft a Compelling Cover Letter: In your cover letter, express your passion for data engineering and how your skills align with the role's responsibilities. Mention your innovative approach to improving data infrastructure and your experience working in dynamic teams.
Showcase Relevant Projects: If you have worked on projects involving ADF, Python, or DBT, be sure to include these in your application. Describe your role and the impact of your contributions on the project's success.
Highlight Soft Skills: Don't forget to mention soft skills such as teamwork, communication, and problem-solving. These are crucial for collaborating effectively in a small team.
How to prepare for a job interview at Harnham
✨Showcase Your Technical Expertise
Be prepared to discuss your experience with Azure, Databricks, and Delta Lake in detail. Highlight specific projects where you've enhanced data infrastructure or managed data ingestion pipelines using ADF and Python.
✨Demonstrate Your Problem-Solving Skills
Expect questions that assess your ability to troubleshoot and optimize data processes. Share examples of challenges you've faced in previous roles and how you resolved them, particularly in a DataOps/DevOps context.
✨Familiarize Yourself with Kubernetes and Terraform
Since the role involves overseeing Kubernetes deployments, brush up on your knowledge of Kubernetes, Airflow, Terraform, and Infrastructure as Code. Be ready to discuss how you've used these tools in past projects.
✨Emphasize Your Innovative Mindset
The company values simplicity and transparency, so be sure to convey your innovative approach to data engineering. Discuss any initiatives you've led that improved efficiency or clarity within data processes.