At a Glance
- Tasks: Design and build data pipelines using Microsoft Fabric and Azure for real business impact.
- Company: Dynamic investment firm focused on innovative data solutions.
- Benefits: Hybrid work model, competitive pay, and opportunities for professional growth.
- Other info: Join a small team where your ideas shape the future of data architecture.
- Why this job: Make a tangible difference with your work in a fast-paced, impactful environment.
- Qualifications: Experience as a Data Engineer with strong skills in Microsoft Fabric and Azure.
The predicted salary is between £60,000 and £80,000 per year.
London | Hybrid
Would you be interested in a Data Engineer contract role focused on building a Microsoft Fabric-based data platform used directly by investment teams? This is a role where your work is used day to day by the business. You will be building data products that drive real decisions rather than sitting in a reporting function.
The role
- You’ll join a small, high-impact team responsible for delivering production-grade pipelines and shaping how data is owned and used across the business.
- You will work closely with stakeholders across the desk, building solutions that solve real problems and get used immediately.
- There is a strong focus on moving towards a federated data model, so you will have the chance to influence how that is designed and implemented.
What you’ll be doing
- Designing and building data pipelines using Microsoft Fabric and Azure
- Developing ELT workflows to turn raw data into usable datasets
- Working directly with internal teams to deliver data solutions
- Contributing to a move towards a federated data architecture
- Improving performance, scalability and reliability across the platform
- Applying best practices across data modelling and pipeline design
- Supporting analytics and AI use cases with high quality data
What they’re looking for
- Strong experience as a Data Engineer in production environments
- Hands-on experience with Microsoft Fabric
- Strong Azure experience across services like Data Factory or Synapse
- Advanced Python for data engineering
- Strong SQL and data modelling experience, ideally with dbt
- Experience with orchestration tools such as Airflow or similar
- Understanding of modern data architecture; exposure to data mesh is useful
- Experience working with CI/CD and Git based workflows
Why you should apply
- Your work will be directly used by the business, not hidden behind reporting layers
- You’ll be part of a genuine move towards a data mesh model
- You’ll have real ownership in a small team where your input shapes the platform
- Fast-moving environment where you can make an impact quickly
Data Engineer employer: DW Search
Contact Details:
DW Search Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with Microsoft Fabric or Azure. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines and projects you've worked on. This is your chance to demonstrate how you can turn raw data into actionable insights, just like the role requires.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge. Be ready to discuss your experience with ELT workflows, SQL, and orchestration tools like Airflow. We want to see how you can contribute to a federated data architecture!
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you're genuinely interested in joining our team and making an impact in the data space.
We think you need these skills to ace the Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Microsoft Fabric, Azure, and any relevant projects that showcase your skills in building data pipelines and working with ELT workflows.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background aligns with our focus on building impactful data solutions. Don’t forget to mention your experience with modern data architecture!
Showcase Your Technical Skills: When listing your technical skills, be specific! Mention your proficiency in Python, SQL, and any orchestration tools like Airflow. We want to see how you can contribute to improving performance and scalability across our platform.
Apply Through Our Website: We encourage you to apply through our website for a smoother application process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at DW Search
✨Know Your Tech Stack
Make sure you brush up on your experience with Microsoft Fabric and Azure services like Data Factory or Synapse. Be ready to discuss specific projects where you've used these technologies, as this will show your hands-on expertise.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've built data pipelines or solved real business problems in previous roles. Highlight your contributions to improving performance and reliability, as this aligns perfectly with what the company is looking for.
✨Understand the Federated Data Model
Familiarise yourself with the concept of a federated data architecture and be prepared to discuss how you would approach designing and implementing it. This shows that you're not just technically skilled but also strategic in your thinking.
✨Engage with Stakeholders
Since you'll be working closely with internal teams, think about how you can demonstrate your communication skills. Prepare to discuss how you've collaborated with stakeholders in the past to deliver data solutions that meet their needs.