At a Glance
- Tasks: Design and maintain scalable data solutions using Microsoft Fabric and Databricks.
- Company: Join a leading recruiter in Data & AI with a vibrant culture.
- Benefits: Enjoy hybrid work, competitive pay, and opportunities for professional growth.
- Why this job: Make an impact by building reliable data platforms that drive analytics.
- Qualifications: Experience with Microsoft Fabric, Databricks, PySpark, and SQL is essential.
- Other info: Collaborate with a dynamic team and enhance your skills in a supportive environment.
The predicted salary is between £36,000 and £60,000 per year.
We are looking for a skilled Fabric & Databricks Engineer to design, build, and maintain scalable analytics and data engineering solutions. You will work at the core of our data platform, enabling analytics, reporting, and advanced data use cases by leveraging Microsoft Fabric and Databricks.
You will collaborate closely with data analysts, data scientists, and stakeholders to deliver reliable, performant, and secure data pipelines and models.
Key Responsibilities
- Design, develop, and maintain end-to-end data pipelines using Microsoft Fabric and Databricks
- Build and optimize Lakehouse architectures using Delta Lake principles
- Ingest, transform, and curate data from multiple sources (APIs, databases, files, streaming)
- Develop scalable data transformations using PySpark and Spark SQL
- Implement data models optimized for analytics and reporting (e.g., star schemas)
- Monitor, troubleshoot, and optimize performance and cost of data workloads
- Apply data quality, validation, and governance best practices
- Collaborate with analysts and BI teams to enable self-service analytics
- Contribute to CI/CD pipelines and infrastructure-as-code for data platforms
- Ensure security, access controls, and compliance across the data estate
- Document solutions and promote engineering best practices
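To give a flavour of the curation work described above, here is a minimal sketch of shaping raw records into a star schema, one dimension table plus one fact table. It uses plain Python dicts rather than Spark DataFrames, and all table, column, and function names are hypothetical examples, not part of the role's actual codebase.

```python
# Illustrative sketch only: splitting raw order records into a customer
# dimension and an order fact table (the basic star-schema shape).
# Names like build_star_schema, customer_id, and amount are made up.

def build_star_schema(raw_orders):
    """Return (dimension rows, fact rows) from raw order records."""
    customer_dim = {}  # customer_id -> one deduplicated dimension row
    fact_rows = []
    for order in raw_orders:
        cid = order["customer_id"]
        # Deduplicate customers into the dimension table
        if cid not in customer_dim:
            customer_dim[cid] = {"customer_id": cid,
                                 "name": order["customer_name"]}
        # Fact rows keep only keys and measures, not descriptive attributes
        fact_rows.append({"order_id": order["order_id"],
                          "customer_id": cid,
                          "amount": order["amount"]})
    return list(customer_dim.values()), fact_rows

raw = [
    {"order_id": 1, "customer_id": "C1", "customer_name": "Acme", "amount": 120.0},
    {"order_id": 2, "customer_id": "C1", "customer_name": "Acme", "amount": 80.0},
    {"order_id": 3, "customer_id": "C2", "customer_name": "Globex", "amount": 50.0},
]
dims, facts = build_star_schema(raw)
print(len(dims), len(facts))  # 2 dimension rows, 3 fact rows
```

In PySpark the same split would typically be a `dropDuplicates` on the customer attributes for the dimension and a `select` of keys and measures for the fact table.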
Skills & Experience
- Strong experience with Microsoft Fabric (Lakehouse, Pipelines, Notebooks, Dataflows, OneLake)
- Hands-on experience with Databricks in production environments
- Proficiency in PySpark and SQL
- Solid understanding of data engineering concepts (ETL/ELT, orchestration, partitioning)
- Experience working with Delta Lake
- Familiarity with cloud platforms (Azure preferred)
- Experience integrating data from relational and non-relational sources
- Knowledge of data modeling for analytics
- Experience with version control (Git) and collaborative development workflows
- Experience with Power BI and semantic models
- Exposure to streaming technologies (Kafka, Event Hubs, Spark Structured Streaming)
- Infrastructure-as-code experience (Bicep, Terraform)
- CI/CD tooling (Azure DevOps, GitHub Actions)
- Familiarity with data governance and cataloging tools
- Experience supporting ML or advanced analytics workloads
- Strong problem-solving and analytical mindset
- Ability to work independently and as part of a cross-functional team
- Clear communication skills and stakeholder awareness
- Passion for building reliable, scalable data platforms
Fabric And Databricks Data Engineer - Outside IR35 - Hybrid in England
Employer: Tenth Revolution Group
Contact Details:
Tenth Revolution Group Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Fabric And Databricks Data Engineer - Outside IR35 - Hybrid in England role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with Microsoft Fabric and Databricks. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects with data pipelines, PySpark, and any cool stuff you've built using Databricks. This will give potential employers a taste of what you can do beyond just your CV.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering scenarios. Be ready to discuss how you've tackled challenges with data quality and governance. We want to see your problem-solving skills in action!
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, we love seeing candidates who are proactive about their job search.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Microsoft Fabric and Databricks. We want to see how your skills align with the role, so don’t be shy about showcasing relevant projects or achievements!
Showcase Your Technical Skills: When writing your application, emphasise your proficiency in PySpark, SQL, and data engineering concepts. We’re looking for someone who can hit the ground running, so let us know what you bring to the table!
Be Clear and Concise: Keep your application straightforward and to the point. We appreciate clarity, so avoid jargon and focus on what makes you a great fit for the role. Remember, less is often more!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity. Don’t miss out!
How to prepare for a job interview at Tenth Revolution Group
✨Know Your Tech Inside Out
Make sure you’re well-versed in Microsoft Fabric and Databricks. Brush up on your knowledge of Lakehouse architectures, Delta Lake principles, and data transformation techniques using PySpark and Spark SQL. Being able to discuss these topics confidently will show that you’re the right fit for the role.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in previous projects and how you tackled them. Think about scenarios where you optimised data pipelines or improved performance. This will demonstrate your analytical mindset and problem-solving abilities, which are crucial for this position.
✨Collaborate and Communicate
Since the role involves working closely with data analysts and stakeholders, be ready to talk about your experience in collaborative environments. Share examples of how you’ve communicated complex technical concepts to non-technical team members, as clear communication is key in cross-functional teams.
✨Understand Data Governance and Best Practices
Familiarise yourself with data quality, validation, and governance best practices. Be prepared to discuss how you’ve implemented these in past roles, as well as your understanding of security and compliance across data estates. This knowledge will set you apart as a candidate who prioritises reliability and security.