At a Glance
- Tasks: Develop and optimise data pipelines using Microsoft Fabric, Azure, and Python.
- Company: WTW is a global leader in advisory, broking, and solutions across various industries.
- Benefits: Enjoy hybrid work options, career development opportunities, and a diverse, inclusive culture.
- Why this job: Join innovative projects that shape the insurance sector with cutting-edge technologies like AI.
- Qualifications: 2+ years as a Data Engineer with expertise in Microsoft Fabric, Azure, and Python required.
- Other info: Be part of a team that values diversity and fosters an inclusive work environment.
The predicted salary is between £43,200 and £72,000 per year.
At WTW, we are a leading global advisory, broking, and solutions company. We work with clients across a wide range of industries, helping them manage risk, optimise benefits, and improve performance. As a Fabric Data Engineer, you will play a key role in leveraging Microsoft Fabric, Azure, and Python to design and build advanced data solutions in the insurance domain.
Location: London, UK
Role: Hybrid Workstyle (Full-time)
Role Overview:
As a Fabric Data Engineer at WTW, you will take ownership of developing and optimising data pipelines, workflows, and ETL processes. You will work with cutting-edge technologies to ensure that data is efficiently processed, stored, and made accessible for analysis. This role is a key part of our data engineering team and requires specific expertise in Microsoft Fabric, Azure, and Python.
Key Responsibilities:
Fabric or Azure Data Engineer (Non-Negotiable):
- Lead the design and development of scalable data pipelines and ETL processes using Microsoft Fabric or Azure technologies.
- Manage and optimise notebooks, pipelines, and workflows to enhance the performance and efficiency of our data architecture.
Data Pipeline Development & ETL:
- Build and maintain high-quality ETL pipelines to clean, transform, and enrich data from various sources.
- Ensure that pipelines are automated, scalable, and fault-tolerant to accommodate large volumes of data (a minimal sketch follows this list).
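To give a flavour of this kind of work, here is a minimal, hypothetical ETL sketch in Python using pandas. The file paths, column names, and the premium-banding rule are illustrative assumptions, not WTW's actual pipeline.

```python
import pandas as pd

def run_etl(source_csv: str, target_parquet: str) -> None:
    # Extract: read raw records from a CSV source (path is hypothetical)
    df = pd.read_csv(source_csv)

    # Transform: drop incomplete rows and normalise a text column
    df = df.dropna(subset=["policy_id", "premium"])
    df["region"] = df["region"].str.strip().str.upper()

    # Enrich: derive a simple premium band for downstream analysis
    df["premium_band"] = pd.cut(
        df["premium"],
        bins=[0, 500, 2000, float("inf")],
        labels=["low", "mid", "high"],
    )

    # Load: write to columnar storage (requires pyarrow or fastparquet)
    df.to_parquet(target_parquet, index=False)

if __name__ == "__main__":
    run_etl("raw_policies.csv", "clean_policies.parquet")
```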
Experience with Notebooks, Pipelines, and Workflows:
- Utilise Notebooks (e.g., Jupyter, Databricks) for data exploration, analysis, and reporting.
- Design and optimise data workflows to streamline key processing tasks, enhancing operational efficiency.
API Integration & Data Ingestion:
- Integrate external and internal APIs to ingest data into our systems, ensuring smooth and consistent data integration.
- Automate API data ingestion processes to enhance data consistency and quality (see the sketch after this list).
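As a rough illustration of automated API ingestion, here is a hypothetical Python sketch that pulls a paginated JSON API with basic retry handling. The endpoint, pagination fields, and response shape are assumptions for illustration only.

```python
import time
import requests

def ingest_paginated_api(base_url: str, max_retries: int = 3) -> list[dict]:
    """Pull all pages from a paginated JSON API with basic retry handling."""
    records: list[dict] = []
    page = 1
    while True:
        for attempt in range(max_retries):
            try:
                resp = requests.get(base_url, params={"page": page}, timeout=10)
                resp.raise_for_status()
                break
            except requests.RequestException:
                if attempt == max_retries - 1:
                    raise  # give up after the final attempt
                time.sleep(2 ** attempt)  # exponential backoff before retrying
        payload = resp.json()
        records.extend(payload.get("items", []))       # assumed response field
        if not payload.get("next_page"):               # assumed pagination field
            return records
        page += 1
```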
AI Experience (Project-based):
- Contribute to projects involving AI, including integrating generative AI or machine learning models within our data workflows.
- Apply AI technologies to improve data processing and provide deeper insights.
SDLC Awareness:
- Adhere to Software Development Life Cycle (SDLC) best practices, including version control, testing, and continuous integration (a small testing sketch follows this list).
- Collaborate with the team to ensure code quality, review processes, and deployment practices.
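To illustrate the testing practice mentioned above, a unit test for a small transformation helper might look like the following. The helper is a made-up example; in a real project a test like this would run under pytest as part of CI.

```python
def normalise_region(value: str) -> str:
    """Trim whitespace and upper-case a region code (hypothetical helper)."""
    return value.strip().upper()

def test_normalise_region() -> None:
    # Discovered and run by pytest; in CI this would execute on every commit
    assert normalise_region("  north ") == "NORTH"
```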
Collaboration & Communication:
- Work closely with cross-functional teams and business stakeholders to understand and meet data requirements.
- Effectively communicate complex technical solutions to both technical and non-technical teams, ensuring alignment with business goals.
Qualifications
Required Qualifications:
Experience:
- A minimum of 2 years of hands-on experience working as a Data Engineer or Fabric Data Engineer, with expertise in Microsoft Fabric, Azure, and Python.
- Proven experience in designing and implementing ETL pipelines, managing notebooks, and optimising data workflows.
- Solid experience working with API integration and data ingestion from various sources.
Technical Skills:
- Proficiency in Python for building data pipelines, automation, and data manipulation.
- Expertise in Azure cloud services and Microsoft Fabric.
- Knowledge of ETL processes, data modelling, and data integration techniques.
- Understanding of AI technologies and experience working on AI-driven projects (advantageous).
- Familiarity with Power Automate for automating business processes (optional).
Soft Skills:
- Strong communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Excellent problem-solving abilities, with a focus on data quality and continuous improvement.
- Self-motivated and organised, with the ability to manage time effectively and prioritise tasks.
Preferred Qualifications:
Industry Experience:
- Experience in the insurance domain or a strong understanding of industry-specific data requirements would be highly beneficial.
Experience with Reporting & Data Visualisation:
- While not mandatory, familiarity with reporting tools or data visualisation (e.g., Power BI) is advantageous.
Why Join WTW?
- Make an Impact: Work on high-profile data engineering projects that shape the future of the insurance sector.
- Innovation: Join a forward-thinking company that uses cutting-edge technologies, including AI, Azure, and Microsoft Fabric.
- Career Development: Take the next step in your career and gain exposure to exciting data engineering challenges.
- Global Reach: Collaborate with a diverse, global team on projects that span multiple industries.
At WTW, we believe difference makes us stronger. We want our workforce to reflect the diverse and varied markets we operate in and to foster a culture of inclusivity where all colleagues feel welcome, valued, and empowered to bring their whole selves to work every day. We are an equal opportunities employer committed to fostering an inclusive work environment throughout our organisation. We embrace all types of diversity.
Employer: WTW
Contact: WTW Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Fabric Data Engineer role
✨Tip Number 1
Familiarise yourself with Microsoft Fabric and Azure by exploring their official documentation and tutorials. This will not only enhance your technical knowledge but also demonstrate your commitment to the role during interviews.
✨Tip Number 2
Engage in online communities or forums related to data engineering, particularly those focused on Python and Azure. Networking with professionals in the field can provide valuable insights and potentially lead to referrals.
✨Tip Number 3
Consider working on personal projects that involve building ETL pipelines or integrating APIs using Azure services. Showcasing these projects in your portfolio can set you apart from other candidates.
✨Tip Number 4
Prepare for potential technical interviews by practising coding challenges related to data manipulation and pipeline development in Python. Being well-prepared will boost your confidence and performance during the interview process.
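As an example of the kind of warm-up exercise you might practise, here is a small aggregation task solved with only the Python standard library. The records and grouping key are made up for illustration.

```python
from collections import defaultdict

# Made-up records of the kind a practice exercise might supply
claims = [
    {"region": "north", "amount": 120.0},
    {"region": "south", "amount": 80.0},
    {"region": "north", "amount": 45.5},
]

# Aggregate claim amounts per region
totals: defaultdict[str, float] = defaultdict(float)
for claim in claims:
    totals[claim["region"]] += claim["amount"]

print(dict(totals))  # {'north': 165.5, 'south': 80.0}
```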
Some tips for your application 🫡
Tailor Your CV: Make sure to customise your CV to highlight your experience with Microsoft Fabric, Azure, and Python. Include specific examples of data pipelines and ETL processes you've developed, as well as any relevant projects in the insurance domain.
Craft a Compelling Cover Letter: Write a cover letter that showcases your passion for data engineering and your understanding of the insurance industry. Mention how your skills align with the responsibilities outlined in the job description, particularly your experience with API integration and AI technologies.
Highlight Relevant Projects: In your application, include details about specific projects where you utilised Notebooks, Pipelines, and Workflows. Discuss how you optimised data workflows and contributed to AI-driven projects, as this will demonstrate your hands-on experience.
Showcase Soft Skills: Don't forget to mention your soft skills, such as communication and problem-solving abilities. Provide examples of how you've effectively communicated complex technical concepts to non-technical stakeholders, as this is crucial for collaboration in the role.
How to prepare for a job interview at WTW
✨Showcase Your Technical Skills
Be prepared to discuss your experience with Microsoft Fabric, Azure, and Python in detail. Bring examples of past projects where you designed and implemented ETL pipelines or optimised data workflows, as this will demonstrate your hands-on expertise.
✨Understand the Insurance Domain
Familiarise yourself with the specific data requirements and challenges within the insurance industry. This knowledge will help you relate your technical skills to the role and show that you understand the business context.
✨Communicate Clearly
Practise explaining complex technical concepts in simple terms. You may need to communicate with non-technical stakeholders, so being able to convey your ideas clearly will be crucial during the interview.
✨Prepare for AI Discussions
Since the role involves contributing to AI projects, be ready to discuss any relevant experience you have with AI technologies or machine learning models. Highlight how you've integrated these into data workflows in previous roles.