At a Glance
- Tasks: Design and implement data pipelines using Azure technologies to support innovative investment solutions.
- Company: Join Pantheon, a leader in private markets investing with a global presence.
- Benefits: Competitive salary, inclusive culture, and opportunities for professional growth.
- Why this job: Be part of a cutting-edge data platform project that shapes the future of finance.
- Qualifications: 5+ years in data engineering, strong skills in Python, SQL, and Azure.
- Other info: Collaborative team environment with a focus on mentorship and career development.
The predicted salary is between £48,000 and £72,000 per year.
Pantheon has been at the forefront of private markets investing for more than 40 years, earning a reputation for providing innovative solutions covering the full lifecycle of investments, from primary fund commitments to co-investments and secondary purchases, across private equity, real assets and private credit. We have partnered with more than 650 clients, including institutional investors of all sizes as well as a growing number of private wealth advisers and investors, with approximately $65bn in discretionary assets under management (as of December 31, 2023). Leveraging our specialized experience and global team of professionals across Europe, the Americas and Asia, we invest with purpose and lead with expertise to build secure financial futures.
Pantheon is undergoing a multi-year programme to build out a new best-in-class Data Platform using cloud-native technologies hosted in Azure. We require an experienced, passionate, hands-on Senior Data Engineer to design and implement new data pipelines and adapt them to business and technology changes. This role will be integral to the success of the programme and to establishing Pantheon as a data-centric organisation. You will work with a modern Azure tech stack, and proven experience of ingesting and transforming data from a variety of internal and external systems is core to the role. You will be part of a small, highly skilled team, and you will need to be passionate about providing best-in-class solutions to our global user base.
Key Responsibilities
- Design, build, and maintain scalable, secure, and high-performance data pipelines on Azure, primarily using Azure Databricks, Azure Data Factory, and Azure Functions.
- Develop and optimise batch and streaming data processing solutions using PySpark and SQL to support analytics, reporting, and downstream data products (a minimal pipeline sketch follows this list).
- Implement robust data transformation layers using dbt, ensuring well-structured, tested, and documented analytical models.
- Collaborate closely with business analysts, QA teams, and business stakeholders to translate data requirements into reliable technical solutions.
- Ensure data quality, reliability, and observability through automated testing, monitoring, logging, and alerting (an example unit test also follows this list).
- Lead on performance tuning, cost optimisation, and capacity planning across Databricks and associated Azure services.
- Implement and maintain CI/CD pipelines using Azure DevOps, promoting best practices for version control, automated testing, and deployment.
- Enforce data governance, security, and compliance standards, including access controls, data lineage, and auditability.
- Contribute to architectural decisions and provide technical leadership, mentoring junior engineers and setting engineering standards.
- Produce clear technical documentation and contribute to knowledge sharing across the data engineering function.
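To make the pipeline responsibilities above concrete, here is a minimal PySpark sketch of a batch ingest-and-cleanse job of the kind the role describes. The storage path, table names, and columns are all hypothetical, and a production job on Databricks would add configuration, secret handling, and Azure Data Factory orchestration around it.

```python
# Minimal batch pipeline sketch: ingest raw files from ADLS Gen2, cleanse,
# and publish a curated Delta table. All names and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_transactions").getOrCreate()

# Ingest raw CSVs landed in Azure Data Lake Storage Gen2 (path is illustrative).
raw = (
    spark.read.option("header", True)
    .csv("abfss://landing@examplelake.dfs.core.windows.net/transactions/")
)

# Standardise types, deduplicate, and drop obviously invalid rows.
curated = (
    raw.withColumn("trade_date", F.to_date("trade_date", "yyyy-MM-dd"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["transaction_id"])
    .filter(F.col("amount").isNotNull())
)

# Simple data-quality gate: fail the job rather than publish bad dates.
if curated.filter(F.col("trade_date").isNull()).count() > 0:
    raise ValueError("trade_date failed to parse for one or more rows")

# Publish as a Delta table for downstream use (assumes a Databricks runtime).
curated.write.format("delta").mode("overwrite").saveAsTable("curated.transactions")
```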
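Because automated testing is called out explicitly, here is an equally hypothetical pytest unit test for a transformation function, the sort of check an Azure DevOps build stage would run on every commit; the function and schema are invented for illustration.

```python
# Hypothetical unit test for a pipeline transformation, runnable with pytest
# in a CI stage (e.g. an Azure DevOps build job) against a local SparkSession.
import pytest
from pyspark.sql import SparkSession, functions as F

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def add_net_amount(df):
    # Transformation under test: net amount is gross amount minus fees.
    return df.withColumn("net_amount", F.col("gross_amount") - F.col("fees"))

def test_add_net_amount(spark):
    df = spark.createDataFrame([(100.0, 2.5)], ["gross_amount", "fees"])
    assert add_net_amount(df).first()["net_amount"] == 97.5
```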
Knowledge & Experience Required
Essential Technical Skills
- Python and PySpark for large-scale data processing.
- SQL (advanced querying, optimisation, and data modelling).
- Azure Data Factory (pipeline orchestration and integration).
- Azure DevOps (Git, CI/CD pipelines, release management).
- Azure Functions / serverless data processing patterns.
- Data modelling (star schemas, data vault, or lakehouse-aligned approaches).
- Data quality, testing frameworks, and monitoring/observability.
- Strong problem-solving ability and a pragmatic, engineering-led mindset.
- Experience in an Agile software development environment.
- Excellent communication skills, with the ability to explain complex technical concepts to both technical and non-technical stakeholders.
- Leadership and mentoring capability, with a focus on raising engineering standards and best practices.
- Significant commercial experience (typically 5+ years) in data engineering roles, with demonstrable experience designing and operating production-grade data platforms.
- Strong hands-on experience with Azure Databricks, including cluster configuration, job orchestration, and performance optimisation.
- Proven experience building data pipelines with Databricks and Azure Data Factory, integrating with Azure-native services (e.g. Data Lake Storage Gen2, Azure Functions).
- Advanced experience with Python for data engineering, including PySpark for distributed data processing.
- Strong SQL expertise, with experience designing and optimising complex analytical queries and data models.
- Practical experience using dbt in a production environment, including model design, testing, documentation, and deployment.
- Experience implementing CI/CD pipelines using Azure DevOps or equivalent tooling.
- Solid understanding of data warehousing and lakehouse architectures, including dimensional modelling and modern analytics patterns (see the star-schema sketch after this list).
- Experience working in agile delivery environments and collaborating with cross-functional teams.
- Exposure to cloud security, data governance, and compliance concepts within Azure.
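As a sketch of the dimensional-modelling expectation above, the following hypothetical PySpark snippet loads a star-schema fact table by resolving surrogate keys against a fund dimension; every table and column name here is invented for illustration.

```python
# Hypothetical star-schema load: resolve the fund dimension's surrogate key,
# quarantine unmatched rows, and append the rest to the fact table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("load_fact_positions").getOrCreate()

positions = spark.table("curated.positions")   # staged source rows
dim_fund = spark.table("warehouse.dim_fund")   # fund_sk (surrogate key), fund_code

fact = (
    positions.join(dim_fund.select("fund_sk", "fund_code"),
                   on="fund_code", how="left")
    .select("fund_sk", "position_date", "market_value", "currency")
)

# Rows with no matching dimension key are quarantined for investigation
# rather than silently dropped.
fact.filter("fund_sk IS NULL").write.format("delta") \
    .mode("append").saveAsTable("quarantine.fact_positions")

fact.filter("fund_sk IS NOT NULL").write.format("delta") \
    .mode("append").saveAsTable("warehouse.fact_positions")
```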
Desired Experience
- Power BI and DAX.
- Business Objects Reporting.
This job description is not to be construed as an exhaustive statement of duties, responsibilities, or requirements. You may be required to perform other job-related duties as reasonably requested by your manager.
Pantheon is an Equal Opportunities employer. We are committed to building a diverse and inclusive workforce, so if you're excited about this role but your past experience doesn't perfectly align, we'd still encourage you to apply.
Equal Opportunity & Privacy
We are committed to ensuring that all candidates have an equal opportunity to participate in the recruitment process. If you require any reasonable adjustments to accommodate your needs, please describe the adjustments you require in your application.
Senior Data Engineer (Temp) in City of London — employer: Pantheon
Contact: Pantheon Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer (Temp) in City of London
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with Azure. A friendly chat can lead to insider info about job openings or even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your best data pipelines and projects. Use platforms like GitHub to share your code and demonstrate your expertise in Python, PySpark, and Azure.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions. Be ready to discuss your experience with Azure Databricks and how you've tackled challenges in past projects. Practice makes perfect!
✨Tip Number 4
Don't forget to apply through our website! We love seeing candidates who are genuinely interested in joining our team. Plus, it’s a great way to ensure your application gets the attention it deserves.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Senior Data Engineer role. Highlight your experience with Azure, Python, and data pipelines to show us you’re the right fit!
Craft a Compelling Cover Letter: Use your cover letter to tell us why you're passionate about data engineering and how your background aligns with our mission at Pantheon. Be genuine and let your personality shine through!
Showcase Your Projects: If you've worked on relevant projects, don’t hesitate to include them! We love seeing real-world applications of your skills, especially with Azure Databricks and data transformation.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for this exciting opportunity!
How to prepare for a job interview at Pantheon
✨Know Your Tech Stack
Make sure you’re well-versed in Azure technologies, especially Azure Databricks and Data Factory. Brush up on your Python and PySpark skills, as you'll likely be asked to demonstrate your ability to build and optimise data pipelines.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific challenges you've faced in previous roles and how you tackled them. Companies like Pantheon value a pragmatic, engineering-led mindset, so be ready to share examples that highlight your analytical thinking.
✨Communicate Clearly
You’ll need to explain complex technical concepts to both technical and non-technical stakeholders. Practice articulating your thoughts clearly and concisely, as effective communication is key in collaborative environments.
✨Demonstrate Leadership Potential
Even if you're not applying for a managerial role, showing that you can mentor others and contribute to team standards will set you apart. Think of instances where you've led projects or helped junior engineers, and be prepared to discuss these experiences.