At a Glance
- Tasks: Join an Agile team as a Data Engineer, working with Databricks, Python, and SQL.
- Company: Be part of a global tech leader known for innovation and excellence.
- Benefits: Enjoy competitive salary, pension, and extensive training opportunities.
- Why this job: Boost your career with unlimited progression in a dynamic, supportive environment.
- Qualifications: Experience with Python, PySpark, and SQL, along with a passion for Data Science, is essential.
- Other info: Open to candidates from all backgrounds; apply now to make an impact!
The predicted salary is between £34,000 and £58,000 per year.
Job Description
Data Engineer (Databricks) – Leeds
(Data Engineer, Python, PySpark, SQL, Big Data, Databricks, R, Machine Learning, AI, Agile, Scrum, TDD, BDD, CI / CD, SOLID principles, Github, Azure DevOps, Jenkins, Terraform, AWS CDK, AWS CloudFormation, Azure, Data Engineer)
Our client is a global innovator and world leader with one of the most recognisable names within technology. They are looking for Data Engineers with significant Databricks experience to join an exceptional Agile engineering team.
We are seeking a Data Engineer with strong Python, PySpark, and SQL experience who possesses a clear understanding of Databricks, as well as a passion for Data Science (R, Machine Learning, and AI). Database experience with SQL and NoSQL technologies (Aurora, MS SQL Server, MySQL) is expected, as is significant Agile and Scrum exposure along with SOLID principles. Knowledge of Continuous Integration tools and Infrastructure as Code, plus strong cloud platform experience (ideally AWS), is also key.
We are keen to hear from talented Data Engineer candidates from all backgrounds.
This is a truly amazing opportunity to work for a prestigious brand that will do wonders for your career. They invest heavily in training and career development with unlimited career progression for top performers.
Location: Leeds
Salary: £40k – £50k + Pension + Benefits
To apply for this position please send your CV to Nathan Warner at Noir Consulting.
NOIRUKTECHREC
NOIRUKREC
Employer: Noir
Contact Detail:
Noir Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Databricks) – Leeds role
✨Tip Number 1
Familiarise yourself with Databricks and its ecosystem. Since the role specifically requires significant experience with Databricks, consider exploring online tutorials or courses that focus on this platform to enhance your understanding and showcase your commitment.
✨Tip Number 2
Engage with the Data Engineering community on platforms like LinkedIn or GitHub. Networking with professionals in the field can provide insights into the latest trends and technologies, and may even lead to referrals for the position.
✨Tip Number 3
Brush up on Agile methodologies and practices. Since the job mentions Agile and Scrum, being able to discuss your experience with these frameworks during an interview will demonstrate your fit for the team dynamics.
✨Tip Number 4
Prepare to discuss your experience with Continuous Integration tools and Infrastructure as Code. Being well-versed in tools like Jenkins, Terraform, and Azure DevOps will set you apart, so be ready to share specific examples of how you've used them in past projects.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Databricks, Python, PySpark, and SQL. Use specific examples to demonstrate your skills in these areas, as well as your familiarity with Agile methodologies and cloud platforms like AWS.
Craft a Compelling Cover Letter: Write a cover letter that showcases your passion for data engineering and your understanding of the role. Mention your experience with machine learning and AI, and how you can contribute to the company's goals.
Highlight Relevant Projects: Include any relevant projects or experiences that showcase your expertise in big data technologies and continuous integration tools. This could be personal projects, contributions to open-source, or professional work that aligns with the job requirements.
Proofread and Format: Before submitting your application, ensure that your documents are free from errors and formatted professionally. A clean, well-organised application reflects your attention to detail, which is crucial for a Data Engineer.
How to prepare for a job interview at Noir
✨Showcase Your Technical Skills
Make sure to highlight your experience with Python, PySpark, and SQL during the interview. Be prepared to discuss specific projects where you've used Databricks and how you applied your knowledge of big data technologies.
✨Demonstrate Agile Methodologies
Since the role requires significant Agile and Scrum exposure, be ready to talk about your experience working in Agile teams. Share examples of how you've contributed to sprints and how you handle changing requirements.
✨Discuss Continuous Integration Practices
Familiarity with CI/CD tools is crucial for this position. Be prepared to explain how you've implemented these practices in previous roles, particularly using tools like Jenkins or Azure DevOps.
✨Express Your Passion for Data Science
The company values a passion for data science, so don't hesitate to share your interests in R, machine learning, and AI. Discuss any relevant projects or coursework that demonstrate your enthusiasm and knowledge in these areas.