At a Glance
- Tasks: Design and deliver scalable data projects using cutting-edge tools like Databricks and Azure.
- Company: Leading Microsoft consultancy driving innovation in cloud transformation.
- Benefits: Fully remote work, training opportunities, 25 days holiday, and private health insurance.
- Why this job: Join a fast-growing team and make a real impact on exciting cloud data projects.
- Qualifications: Experience with Azure, Databricks, SQL, and Python is essential.
- Other info: Enjoy a dynamic work environment with clear career pathways and perks.
The predicted salary is between £48,000 and £64,000 per year.
I'm working with a client who is a leading Microsoft consultancy that's growing fast and shaping the future of data, driving innovation and helping clients accelerate their cloud transformation journeys. With a pipeline of cutting‑edge projects, this is your chance to work at the forefront of cloud data engineering and make a real impact.
What You'll Do:
- Design and deliver scalable data projects using Databricks, Synapse & Fabric
- Build and optimise ETL/ELT pipelines and data models with SQL & Python
- Create advanced Power BI dashboards for actionable insights
- Implement data lakes and medallion lakehouse architectures (see the illustrative sketch after this list)
- Ensure data quality, governance & security across all solutions
- Collaborate in an Agile environment with cross-functional teams
- Drive cloud migrations and champion best practices in data engineering
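For illustration only, here is a minimal sketch of what one such task might look like in practice: a bronze-to-silver step of a medallion lakehouse, written in PySpark for Databricks. Every table, column, and path name below (raw_orders, order_id, /mnt/landing/) is hypothetical, not taken from any client environment.

```python
# Illustrative bronze -> silver medallion step in PySpark (hypothetical names throughout).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land the raw files as-is, stamping an ingestion timestamp for lineage.
bronze = (
    spark.read.json("/mnt/landing/raw_orders/")        # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.raw_orders")

# Silver: deduplicate and conform the bronze data for downstream modelling.
silver = (
    spark.table("bronze.raw_orders")
    .dropDuplicates(["order_id"])                       # hypothetical business key
    .filter(F.col("order_total") >= 0)
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```

A gold layer would typically follow the same pattern, aggregating the silver tables into business-level models that feed the Power BI dashboards mentioned above.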
Benefits:
- Rapid Growth & Exciting Projects - Work on cutting‑edge Microsoft Cloud solutions
- Investment in YOU - Training, certifications & clear career pathways
- Fully Remote - Home‑based contract with travel expenses covered
- 25 days holiday
- Private health insurance (after one year)
- Life assurance (4x base salary)
- Enhanced parental pay
- Perkbox, cycle scheme, electric car scheme
Key Experience:
- Strong background in Azure Synapse, Databricks, and/or Microsoft Fabric
- Expertise in ETL/ELT development using SQL & Python
- Experience with data lakes and large‑scale datasets
- Solid understanding of BI & data warehousing concepts
Ready to take the next step in your career? Don't delay, apply now!
Data Engineer employer: Jefferson Frank
Contact Details:
Jefferson Frank Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to folks in the industry on LinkedIn or at meetups. We all know that sometimes it’s not just what you know, but who you know that can land you that dream Data Engineer role.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects with Databricks, SQL, and Python. We want to see what you can do, so make sure to highlight those cutting-edge solutions you've worked on.
✨Tip Number 3
Prepare for those interviews! Brush up on your technical knowledge and be ready to discuss your experience with Azure and data lakes. We recommend practicing common interview questions and even doing mock interviews with friends.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we’re always looking for passionate individuals ready to drive innovation in cloud data engineering.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Azure, Databricks, and SQL. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Tell us why you’re excited about this role and how you can contribute to our innovative projects. Keep it engaging and personal.
Showcase Your Projects: If you've worked on any cool data engineering projects, make sure to mention them! We love seeing real examples of your work, especially if they involve ETL/ELT pipelines or Power BI dashboards.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at Jefferson Frank
✨Know Your Tech Stack
Make sure you’re well-versed in Azure Synapse, Databricks, and Microsoft Fabric. Brush up on your ETL/ELT development skills using SQL and Python, as these will likely come up during the interview. Being able to discuss specific projects where you've used these technologies will really impress.
✨Showcase Your Problem-Solving Skills
Prepare to discuss how you've tackled challenges in data engineering. Think of examples where you optimised ETL pipelines or implemented data lakes. This shows that you not only understand the theory but can apply it in real-world scenarios.
✨Understand Agile Methodologies
Since the role involves collaboration in an Agile environment, be ready to talk about your experience working in Agile teams. Highlight any specific methodologies you’ve used and how they helped improve project outcomes.
✨Ask Insightful Questions
Interviews are a two-way street! Prepare thoughtful questions about the company’s current projects, their approach to cloud migrations, or how they ensure data quality and governance. This shows your genuine interest and helps you assess if the company is the right fit for you.