At a Glance
- Tasks: Design and build scalable data pipelines using Azure and Databricks.
- Company: Join a forward-thinking data team in a modern tech environment.
- Benefits: Competitive salary, flexible work options, and opportunities for professional growth.
- Other info: Collaborative culture with a focus on innovation and best practices.
- Why this job: Make a real impact by delivering robust data solutions at scale.
- Qualifications: Experience with Azure, Databricks, and building scalable data pipelines.
The predicted salary is between £55,000 and £70,000 per year.
We’re looking for a Data Engineer to join a high-performing data function, playing a key role in building and scaling a modern Azure-based data platform. This is an opportunity to work on a Databricks lakehouse environment, delivering robust, scalable pipelines that power critical analytics and business insights.
The Role
You’ll be responsible for designing, building, and maintaining end-to-end data pipelines, taking data from source systems through to curated datasets ready for reporting and analytics. Working closely with architecture and delivery teams, you’ll help shape a high-quality, governed, and observable data platform.
What You’ll Be Doing
- Build and maintain Azure & Databricks pipelines
- Develop scalable ELT processes using PySpark
- Implement data quality, lineage, and security controls
- Own CI/CD pipelines and Infrastructure as Code (Terraform)
- Ensure pipelines are testable, observable, and easy to troubleshoot
- Optimise data services for performance and cost efficiency
- Collaborate with stakeholders to translate requirements into data solutions
- Contribute to Agile delivery with clear documentation and best practices
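To make the pipeline responsibilities above concrete, here is a minimal sketch of an ELT step with a simple data-quality control: extract raw records, validate and cast them, and quarantine failures for auditing. It uses plain Python rather than PySpark so it is self-contained, and all field names (`order_id`, `amount`) are hypothetical, not taken from the job description.

```python
from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    amount: float


def extract():
    # In a real pipeline this would read from a source system
    # (e.g. a database or files in ADLS); hard-coded here for illustration.
    return [
        {"order_id": "A1", "amount": "19.99"},
        {"order_id": "", "amount": "5.00"},    # fails quality check: missing key
        {"order_id": "A3", "amount": "oops"},  # fails type cast
    ]


def validate_and_transform(raw):
    # Split rows into clean (typed) records and rejected rows.
    # Quarantining rejects with a reason supports lineage and audit.
    clean, rejected = [], []
    for row in raw:
        try:
            if not row["order_id"]:
                raise ValueError("missing order_id")
            clean.append(Order(row["order_id"], float(row["amount"])))
        except (ValueError, KeyError) as exc:
            rejected.append((row, str(exc)))
    return clean, rejected


clean, rejected = validate_and_transform(extract())
print(len(clean), len(rejected))  # 1 clean row, 2 rejected
```

In a Databricks lakehouse, the same pattern typically appears as PySpark DataFrame transformations with quality expectations, and the rejected rows land in a quarantine table rather than a Python list.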
What We’re Looking For
- Strong experience with Azure Data Platform & Databricks
- Proven track record building scalable data pipelines
- Hands-on with PySpark / Spark-based processing
- Experience with Terraform & CI/CD pipelines
- Solid understanding of data governance, quality, and lineage
- Familiarity with Git-based workflows and DevOps practices
- Strong communication skills and ability to work with cross-functional teams
Why Apply?
- Work on a modern lakehouse architecture
- Be part of a forward-thinking data team
- Influence engineering standards and platform design
- Deliver impactful data solutions at scale
Data Engineer in Solihull, employer: RedRock Resourcing
Contact Details:
RedRock Resourcing Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Solihull
✨Tip Number 1
Network like a pro! Reach out to folks in the data engineering space, especially those working with Azure and Databricks. Attend meetups or webinars, and don’t be shy about asking for informational interviews – you never know where a chat might lead!
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving scalable data pipelines and ELT processes. Share it on platforms like GitHub, and make sure to highlight your experience with PySpark and Terraform.
✨Tip Number 3
Prepare for technical interviews by brushing up on your knowledge of data governance and CI/CD practices. Practice coding challenges related to data pipelines and be ready to discuss how you’ve optimised performance and cost efficiency in past projects.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive and engaged with our platform.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Azure, Databricks, and building scalable data pipelines. We want to see how your skills align with what we're looking for, so don’t be shy about showcasing your relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're excited about the Data Engineer role and how your background makes you a perfect fit. We love seeing passion and personality, so let that come through!
Showcase Your Technical Skills: When filling out your application, make sure to mention your hands-on experience with PySpark, Terraform, and CI/CD pipelines. We’re keen on candidates who can demonstrate their technical prowess, so don’t hold back!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it’s super easy – just follow the prompts and you’ll be all set!
How to prepare for a job interview at RedRock Resourcing
✨Know Your Tech Stack
Make sure you’re well-versed in Azure, Databricks, and PySpark. Brush up on your knowledge of building scalable data pipelines and be ready to discuss specific projects where you've implemented these technologies.
✨Showcase Your Problem-Solving Skills
Prepare to talk about challenges you've faced in previous roles, especially around data quality and governance. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight how you tackled those issues.
✨Understand CI/CD and Infrastructure as Code
Familiarise yourself with Terraform and CI/CD processes. Be prepared to explain how you’ve used these tools in past projects to automate deployments and ensure code quality.
✨Communicate Effectively
Since collaboration is key, practice articulating your thoughts clearly. Think about how you can translate technical jargon into layman's terms for stakeholders who may not have a technical background.