At a Glance
- Tasks: Design and build scalable data pipelines using Azure and Databricks.
- Company: Join a forward-thinking data team in a modern tech environment.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Other info: Collaborative culture with a focus on innovation and best practices.
- Why this job: Make a real impact by delivering robust data solutions at scale.
- Qualifications: Experience with Azure, Databricks, and building scalable data pipelines.
The predicted salary is between £50,000 and £70,000 per year.
We’re looking for a Data Engineer to join a high-performing data function, playing a key role in building and scaling a modern Azure-based data platform. This is an opportunity to work on a Databricks lakehouse environment, delivering robust, scalable pipelines that power critical analytics and business insights.
The Role
You’ll be responsible for designing, building, and maintaining end-to-end data pipelines, taking data from source systems through to curated datasets ready for reporting and analytics. Working closely with architecture and delivery teams, you’ll help shape a high-quality, governed, and observable data platform.
What You’ll Be Doing
- Build and maintain Azure & Databricks pipelines
- Develop scalable ELT processes using PySpark
- Implement data quality, lineage, and security controls
- Own CI/CD pipelines and Infrastructure as Code (Terraform)
- Ensure pipelines are testable, observable, and easy to troubleshoot
- Optimise data services for performance and cost efficiency
- Collaborate with stakeholders to translate requirements into data solutions
- Contribute to Agile delivery with clear documentation and best practices
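The responsibilities above centre on one recurring pattern: taking raw source records through transformation and quality checks into curated, analytics-ready datasets. As a rough sketch of that pattern (written in plain Python so it runs anywhere; in the role itself this would be PySpark on Databricks, and every field name and rule below is a hypothetical example, not the employer's actual pipeline):

```python
# Illustrative ELT sketch: raw records -> validated, curated rows.
# All field names and quality rules are hypothetical examples.

RAW_ORDERS = [
    {"order_id": "1001", "amount": "19.99", "region": "UK"},
    {"order_id": "1002", "amount": "", "region": "UK"},      # bad: missing amount
    {"order_id": "1003", "amount": "5.50", "region": None},  # bad: missing region
]

def transform(record):
    """Cast types and normalise fields; return None if the row fails quality checks."""
    if not record.get("amount") or record.get("region") is None:
        return None  # data-quality rule: reject incomplete rows
    return {
        "order_id": int(record["order_id"]),
        "amount": float(record["amount"]),
        "region": record["region"].upper(),
    }

def run_pipeline(raw_rows):
    """Return curated output plus a rejected-row count for basic observability."""
    curated, rejected = [], 0
    for row in raw_rows:
        cleaned = transform(row)
        if cleaned is None:
            rejected += 1
        else:
            curated.append(cleaned)
    return curated, rejected

curated, rejected = run_pipeline(RAW_ORDERS)
print(curated)   # the one valid row, typed and normalised
print(rejected)  # count of rows rejected by quality checks
```

Keeping the transform a pure function and surfacing a rejection count is what makes a pipeline like this testable and observable, two qualities the role calls out explicitly.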
What We’re Looking For
- Strong experience with Azure Data Platform & Databricks
- Proven track record building scalable data pipelines
- Hands-on with PySpark / Spark-based processing
- Experience with Terraform & CI/CD pipelines
- Solid understanding of data governance, quality, and lineage
- Familiarity with Git-based workflows and DevOps practices
- Strong communication skills and ability to work with cross-functional teams
Why Apply?
- Work on a modern lakehouse architecture
- Be part of a forward-thinking data team
- Influence engineering standards and platform design
- Deliver impactful data solutions at scale
Employer: RedRock Resourcing (Data Engineer, Birmingham)
Contact Detail:
RedRock Resourcing Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Birmingham
✨Tip Number 1
Network like a pro! Reach out to folks in the data engineering space, especially those working with Azure and Databricks. Attend meetups or webinars, and don’t be shy about sliding into DMs on LinkedIn – you never know who might have the inside scoop on job openings!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your work with Azure Data Platform and Databricks. Include projects where you've built scalable data pipelines or implemented CI/CD processes. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge. Be ready to discuss your experience with PySpark, Terraform, and data governance. Practise explaining complex concepts in simple terms – it shows you can communicate effectively with cross-functional teams, which is key in this role.
✨Tip Number 4
Don’t forget to apply through our website! We love seeing candidates who are genuinely interested in joining our team. Tailor your application to highlight your relevant experience and how you can contribute to building a modern data platform with us.
We think you need these skills to ace the Data Engineer role in Birmingham
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Azure, Databricks, and building scalable data pipelines. We want to see how your skills align with what we're looking for, so don’t be shy about showcasing your relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're excited about the Data Engineer role and how your background makes you a perfect fit. We love seeing genuine enthusiasm and a clear understanding of our needs.
Showcase Your Technical Skills: When detailing your experience, focus on specific technologies like PySpark, Terraform, and CI/CD pipelines. We’re keen to know how you've used these tools in past projects, so give us the details that demonstrate your expertise!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re serious about joining our team!
How to prepare for a job interview at RedRock Resourcing
✨Know Your Tech Stack
Make sure you’re well-versed in Azure, Databricks, and PySpark. Brush up on your knowledge of building scalable data pipelines and be ready to discuss specific projects where you've implemented these technologies.
✨Showcase Your Problem-Solving Skills
Prepare to talk about challenges you've faced in previous roles, especially around data quality and governance. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight how you tackled those issues.
✨Understand CI/CD and Infrastructure as Code
Familiarise yourself with Terraform and CI/CD pipelines. Be prepared to explain how you’ve used these tools in past projects to automate deployments and ensure code quality. This will show that you can contribute to their Agile delivery process.
✨Communicate Effectively
Since collaboration is key, practise articulating your thoughts clearly. Think about how you would explain complex data solutions to non-technical stakeholders. Good communication can set you apart from other candidates.