At a Glance
- Tasks: Design and build innovative data pipelines using Azure Data Platform and PySpark.
- Company: Join Fractal, a strategic AI partner for Fortune 500 companies with a vibrant culture.
- Benefits: Enjoy competitive salary, health benefits, and opportunities for professional growth.
- Why this job: Make an impact by empowering imagination with intelligence in a dynamic environment.
- Qualifications: Experience in data engineering and strong collaboration skills are essential.
- Other info: Be part of a Great Place to Work with excellent career advancement opportunities.
The predicted salary is between £36,000 and £60,000 per year.
It’s fun to work in a company where people truly BELIEVE in what they are doing! We’re committed to bringing passion and customer focus to the business.
Fractal is a strategic AI partner to Fortune 500 companies with a vision to power every human decision in the enterprise. Fractal is building a world where individual choices, freedom, and diversity are the greatest assets. An ecosystem where human imagination is at the heart of every decision. Where no possibility is written off, only challenged to get better. We believe that a true Fractalite is the one who empowers imagination with intelligence. Fractal has been featured as a Great Place to Work by The Economic Times in partnership with the Great Place to Work® Institute and recognized as a Cool Vendor and a Vendor to Watch by Gartner.
Location: London
Core Technical Responsibilities
- Design and build end-to-end data pipelines (batch and near real-time) using PySpark, Databricks, and Azure Data Platform (ADF, ADLS, Synapse).
- Be hands-on in development, debugging, optimization, and production support of data pipelines.
- Work with or extend existing/proprietary ETL frameworks (e.g., Mars' Simpel or similar) and improve their performance and reliability.
- Implement data modeling, transformation, and orchestration patterns aligned with best practices.
- Apply data engineering fundamentals including partitioning, indexing, caching, cost optimization, and performance tuning.
- Collaborate with upstream and downstream teams to ensure data quality, reliability, and SLAs.
- Contribute to the design of cloud-native data architectures covering ingestion, processing, storage, and consumption.
- Translate business and analytical requirements into practical, scalable data solutions.
- Support data governance practices including metadata, lineage, data quality checks, and access controls.
- Work within hybrid environments (on-prem to cloud) and support modernization initiatives.
- Understand and apply data mesh concepts where relevant (domain ownership, reusable data products, basic contracts).
- Evaluate tools and frameworks with a build vs. buy mindset, recommending pragmatic solutions.
- Act as a technical anchor for a data engineering team.
- Provide technical guidance, code reviews, and mentoring to engineers.
- Own delivery for assigned data products or pipelines — from design through deployment.
- Collaborate with product owners, analysts, and architects to clarify requirements and priorities.
- Engage with business and analytics stakeholders to understand data needs and translate them into technical solutions.
- Clearly communicate technical designs and trade-offs to both technical and non-technical audiences.
- Escalate risks and propose mitigation strategies proactively.
- Support documentation of architecture, pipelines, and operational processes.
Nice to Have
- Exposure to AI/ML data workflows (feature engineering, model inputs, MLOps basics).
- Awareness of LLMs / Agentic AI architectures from a data platform perspective.
- Experience with other platforms such as AWS, GCP, Snowflake, BigQuery, Redshift.
- Familiarity with data governance or catalog tools (DataHub, Collibra, dbt, etc.).
- Experience working in CPG, Retail, Supply Chain, or similar domains.
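As a loose illustration of the data quality checks mentioned in the responsibilities above (a minimal plain-Python sketch with hypothetical function names and thresholds, not Fractal's actual framework or a PySpark implementation):

```python
def null_rate(rows, field):
    """Fraction of rows where `field` is missing or None."""
    if not rows:
        return 0.0
    missing = sum(1 for row in rows if row.get(field) is None)
    return missing / len(rows)


def check_quality(rows, field, max_null_rate=0.05):
    """Simple data-quality gate: pass if the null rate stays under a threshold.

    Returns a (passed, rate) tuple so the caller can log the measured rate.
    """
    rate = null_rate(rows, field)
    return rate <= max_null_rate, rate


# Example: three records, one missing price.
records = [
    {"sku": "A1", "price": 9.99},
    {"sku": "A2", "price": None},
    {"sku": "A3", "price": 4.50},
]
ok, rate = check_quality(records, "price", max_null_rate=0.5)
```

In a real pipeline this kind of gate would typically run against a DataFrame before data is published downstream, with failures escalated per the agreed SLAs.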
Data Architect - Azure Data Engineering in London employer: Fractal
Contact Detail:
Fractal Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Architect - Azure Data Engineering role in London
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, projects, or any relevant work you've done. This gives you a chance to demonstrate your expertise beyond just words on a CV.
✨Tip Number 3
Prepare for interviews by practising common questions related to data architecture and engineering. Think about how you can relate your past experiences to the role at Fractal, especially around Azure Data Platform and ETL frameworks.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in joining our team at Fractal.
We think you need these skills to ace the Data Architect - Azure Data Engineering role in London
Some tips for your application 🫡
Show Your Passion: When you're writing your application, let your enthusiasm for data engineering shine through! We want to see that you truly believe in the power of data and how it can drive decisions. Share why you're excited about the role and how you can contribute to our mission.
Tailor Your CV: Make sure your CV is tailored to the Data Architect position. Highlight your experience with Azure Data Platform, PySpark, and any relevant projects you've worked on. We love seeing specific examples that demonstrate your skills and how they align with what we're looking for.
Be Clear and Concise: Keep your application clear and to the point. Use straightforward language to explain your technical skills and experiences. We appreciate when candidates communicate effectively, especially when it comes to complex topics like data architecture.
Apply Through Our Website: Don't forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it gives you a chance to explore more about our company and culture while you’re at it!
How to prepare for a job interview at Fractal
✨Know Your Tech Inside Out
Make sure you’re well-versed in the technologies mentioned in the job description, like PySpark, Databricks, and Azure Data Platform. Brush up on your data pipeline design skills and be ready to discuss how you've implemented these tools in past projects.
✨Showcase Your Problem-Solving Skills
Prepare to share specific examples of how you've tackled challenges in data engineering. Think about times when you optimised performance or improved data quality, and be ready to explain your thought process and the impact of your solutions.
✨Communicate Clearly
Practice explaining complex technical concepts in simple terms. You’ll likely need to engage with both technical and non-technical stakeholders, so being able to bridge that gap is crucial. Consider doing mock interviews to refine your communication style.
✨Understand the Company Culture
Research Fractal’s values and mission. Be prepared to discuss how your personal values align with theirs, especially around empowering imagination with intelligence. Showing that you resonate with their culture can set you apart from other candidates.