At a Glance
- Tasks: Design and deliver a modern cloud-first data platform for a leading financial organisation.
- Company: Join a top player in the UK financial sector focused on data transformation.
- Benefits: Hybrid work model, competitive salary, and opportunities for professional growth.
- Other info: Collaborative environment with a focus on mentorship and strategic alignment.
- Why this job: Make a real impact by driving innovation in data engineering.
- Qualifications: Experience with Azure services, SQL, Python, and strong data architecture knowledge.
The predicted salary is between £60,000 and £80,000 per year.
A leading organisation within the UK financial sector is embarking on a major data transformation programme. They are looking for an experienced Data Engineer to help design and deliver a modern, cloud‑first data platform that will underpin some of the organisation’s most critical functions.
- Design, build and deploy scalable, secure data solutions using Azure Databricks, Data Factory and Data Lake Storage.
- Develop and optimise advanced data pipelines with Python, SQL, Spark/PySpark and Delta Lake.
- Champion strong data quality, governance and observability practices.
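The responsibilities above centre on staged pipelines with strong data-quality gates. As a rough illustration only (plain Python with hypothetical function and field names, standing in for the PySpark/Delta Lake jobs the role actually involves), an extract-transform-validate flow might be sketched as:

```python
# Illustrative sketch only: a minimal extract -> transform -> validate pipeline.
# All names here are hypothetical; in the role itself this logic would
# typically run as PySpark/Delta Lake jobs on Azure Databricks.

def extract(rows):
    """Stand-in for a source read (in practice, Data Lake Storage)."""
    return list(rows)

def transform(rows):
    """Normalise amounts to pence and drop records missing an account id."""
    out = []
    for r in rows:
        if r.get("account_id") is None:
            continue
        out.append({**r, "amount_pence": round(r["amount"] * 100)})
    return out

def validate(rows):
    """Basic data-quality gate: no negative amounts, unique ids per batch."""
    assert all(r["amount_pence"] >= 0 for r in rows), "negative amount found"
    ids = [r["account_id"] for r in rows]
    assert len(ids) == len(set(ids)), "duplicate account ids in batch"
    return rows

def run_pipeline(source):
    return validate(transform(extract(source)))

batch = [
    {"account_id": "A1", "amount": 10.50},
    {"account_id": None, "amount": 3.00},   # dropped: missing id
    {"account_id": "A2", "amount": 0.99},
]
result = run_pipeline(batch)
```

The point of the sketch is the shape, not the code: each stage is a small, testable unit, and the quality gate fails the batch loudly rather than letting bad records flow downstream.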
- Extensive experience with Azure services including Azure Databricks, Azure Data Lake Storage, and Azure Data Factory.
- Advanced proficiency in SQL, Python, and Spark (PySpark), with a strong focus on performance optimisation and distributed processing.
- Strong understanding of data architecture principles and cloud-native design patterns.
- Demonstrated ability to lead technical delivery, mentor engineering teams and collaborate with stakeholders to ensure alignment between data solutions and business strategy.
- Proficiency in Linux/Unix environments and shell scripting.
- Deep understanding of source control, testing strategies, and agile development practices.
- Self-motivated with a strategic mindset and a passion for driving innovation in data engineering.
- Experience delivering data pipelines on Hortonworks/Cloudera on-prem and leading cloud migration initiatives.
- Familiarity with Apache Airflow, data modelling, and metadata management.
- Experience influencing enterprise data strategy and contributing to architectural governance.
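Since the qualifications mention Apache Airflow, the core idea behind an Airflow DAG is dependency-ordered task execution. A minimal sketch of that idea, using only Python's standard-library `graphlib` (task names are hypothetical; a real pipeline would use Airflow's own DAG and operator API):

```python
# Illustrative sketch of DAG-style task ordering, the core idea behind
# orchestrators like Apache Airflow. Standard library only; task names
# are hypothetical.
from graphlib import TopologicalSorter

# Map each task to the tasks it depends on (Airflow's upstream edges).
dag = {
    "extract": set(),
    "transform": {"extract"},
    "quality_check": {"transform"},
    "load": {"quality_check"},
    "publish_metadata": {"load"},
}

# static_order() yields tasks so every dependency runs before its dependents.
order = list(TopologicalSorter(dag).static_order())
```

An orchestrator adds scheduling, retries, and observability on top of exactly this ordering guarantee.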
Data Engineer (f/m) employer: Lorien
Contact Detail:
Lorien Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Data Engineer (f/m)
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, attend meetups, and engage in online forums. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Azure Databricks, SQL, and Python. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and soft skills. Be ready to discuss your experience with data pipelines and cloud migration initiatives, and don’t forget to highlight your ability to mentor and lead teams.
✨Tip Number 4
Apply through our website! We’ve got loads of opportunities that match your skills. Plus, it’s a great way to ensure your application gets seen by the right people. Don’t miss out!
We think you need these skills to ace Data Engineer (f/m)
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Azure services, Python, and SQL, and don’t forget to mention any relevant projects that showcase your skills in building scalable data solutions.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your background aligns with our needs. Be sure to mention your experience with cloud-native design patterns and data governance.
Showcase Your Technical Skills: In your application, be specific about your technical skills. Mention your proficiency in tools like Azure Databricks and Data Factory, and provide examples of how you've optimised data pipelines or led technical delivery in past roles.
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of your application and ensures you don’t miss out on any important updates from us!
How to prepare for a job interview at Lorien
✨Know Your Tech Stack
Make sure you’re well-versed in Azure Databricks, Data Factory, and Data Lake Storage. Brush up on your Python, SQL, and Spark skills, as these will be crucial during technical discussions. Be ready to showcase your experience with performance optimisation and distributed processing.
✨Showcase Your Projects
Prepare to discuss specific projects where you've designed and deployed data solutions. Highlight your role in leading technical delivery and mentoring teams. Use examples that demonstrate your ability to align data solutions with business strategy, as this will resonate well with the interviewers.
✨Understand Data Governance
Familiarise yourself with data quality, governance, and observability practices. Be prepared to discuss how you’ve championed these principles in past roles. This shows that you not only understand the technical side but also the importance of maintaining high standards in data management.
✨Be Agile and Adaptable
Since the role involves working in an agile environment, be ready to talk about your experience with agile development practices. Discuss how you’ve adapted to changes in project scope or technology, and how you’ve used tools like GitHub Actions and Azure DevOps to streamline processes.