At a Glance
- Tasks: Design and maintain scalable data models using SQL, Snowflake, and DBT.
- Company: Join First Derivative, a leader in data-driven financial services solutions.
- Benefits: Enjoy private healthcare, pension, and a cycle to work scheme.
- Why this job: Be part of a dynamic team transforming the financial landscape with cutting-edge technology.
- Qualifications: 2+ years of experience with SQL, Snowflake, and DBT required.
- Other info: Collaborate in an agile environment with opportunities for professional growth.
The predicted salary is between £43,200 and £72,000 per year.
This job is brought to you by Jobs/Redefined, the UK's leading over-50s age-inclusive jobs board.
First Derivative is driven by people, data, and technology, unlocking the value of insight, hindsight, and foresight to drive organisations forward. Counting many of the world's leading investment banks as clients, we help our clients navigate the data-driven, digital revolution that is transforming the financial services sector. Our global teams span 15 offices serving clients across EMEA, North America, and APAC.
As an EPAM Systems, Inc. (NYSE: EPAM) company, a leading global provider of digital platform engineering and development services, we deliver advanced financial services solutions by empowering operational insights, driving innovation, and enabling more effective risk management in an increasingly data-centric world. Together with EPAM, we combine deep industry expertise with cutting-edge technology to help clients stay ahead in a rapidly evolving financial landscape, offering comprehensive solutions that drive business transformation and sustainable growth.
We are looking for an experienced Data Engineer with deep expertise in SQL, Snowflake, and DBT to support our ongoing data platform modernization initiative.
You will help design, implement, and maintain scalable, modular data models and transformations using modern tooling. This role is ideal for someone who understands how to work with complex data structures, including JSON, and build efficient, reusable DBT models that support analytical and operational use cases across the organization.
RESPONSIBILITIES
- Develop robust and reusable DBT models to transform and organize raw data into clean, well-structured datasets
- Write complex, efficient SQL, including CTEs, stored procedures, views, and partitioning strategies
- Build relational models from semi-structured data (e.g., JSON) using SQL and DBT
- Work within the Snowflake platform to design performant, scalable data solutions
- Optimize use of Snowflake Virtual Warehouses, manage data sharing, and understand cost/performance trade-offs
- Collaborate with analysts, data scientists, and engineering teams to ensure consistent and reliable data delivery
- Participate in code reviews and contribute to best practices around data modeling, transformation logic, and documentation
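As a flavour of the transformation work described above, a dbt model might flatten semi-structured JSON into a relational shape using Snowflake's LATERAL FLATTEN. This is a hypothetical sketch; the source, model, and column names are invented for illustration and are not part of this role's codebase:

```sql
-- Hypothetical dbt model: flatten a raw JSON events payload into rows.
-- Source and column names are illustrative only.
with raw_events as (
    select payload
    from {{ source('raw', 'events') }}
)

select
    payload:event_id::string        as event_id,
    payload:occurred_at::timestamp  as occurred_at,
    item.value:sku::string          as sku,
    item.value:quantity::number     as quantity
from raw_events,
     lateral flatten(input => payload:items) as item
```

Each element of the `items` array becomes its own row, with top-level fields repeated alongside it — the kind of reusable, modular model the role calls for.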
REQUIREMENTS
- Excellent SQL skills – demonstrated expertise with CTEs, procedures, partitioning strategies, and creating views
- Strong working knowledge of Snowflake, including virtual warehouses, data sharing, and querying JSON and semi-structured data
- Minimum 2 years' experience with DBT, including building and managing reusable transformation models
- Proven ability to model and transform complex data sources (especially JSON) into structured relational models
- Familiarity with version control (e.g., Git), testing frameworks, and deployment practices within DBT
- Strong understanding of performance optimization and cost-awareness in a cloud data warehouse context
- Experience working in a collaborative, agile environment
- Financial services or regulated industry experience is a plus
WE OFFER
- Private Healthcare Package
- Pension
- Employee Assistance Programme
- Enhanced Maternity policy
- Group Life Protection Benefit
- Give as You Earn
- Cycle to Work Scheme
- Employee Referral Bonus Scheme
- Diversity Networks
- Access to a range of skills and certifications
Senior Data Engineer (SQL, Snowflake, DBT) employer: EPAM
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer (SQL, Snowflake, DBT) role
✨Tip Number 1
Familiarise yourself with the latest trends and best practices in data engineering, particularly around SQL, Snowflake, and DBT. This knowledge will not only help you during interviews but also demonstrate your commitment to staying current in a rapidly evolving field.
✨Tip Number 2
Network with professionals in the financial services sector who are already working with data platforms. Engaging in conversations on platforms like LinkedIn can provide insights into the company culture and expectations, which can be invaluable during the interview process.
✨Tip Number 3
Prepare to discuss specific projects where you've successfully implemented SQL, Snowflake, or DBT solutions. Be ready to explain your thought process, the challenges you faced, and how you overcame them, as this will showcase your problem-solving skills.
✨Tip Number 4
Demonstrate your understanding of performance optimisation and cost-awareness in cloud data warehousing. Being able to articulate how you've managed these aspects in previous roles will set you apart from other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with SQL, Snowflake, and DBT. Use specific examples of projects where you've developed robust DBT models or optimised data solutions to demonstrate your expertise.
Craft a Compelling Cover Letter: In your cover letter, explain why you're passionate about data engineering and how your skills align with the responsibilities outlined in the job description. Mention your experience with complex data structures and your collaborative work style.
Showcase Relevant Projects: If you have any relevant projects or case studies, include them in your application. This could be links to GitHub repositories or detailed descriptions of past work that involved SQL, Snowflake, or DBT.
Proofread Your Application: Before submitting, carefully proofread your application for any spelling or grammatical errors. A polished application reflects your attention to detail, which is crucial in data engineering roles.
How to prepare for a job interview at EPAM
✨Showcase Your SQL Mastery
Be prepared to discuss your experience with SQL in detail. Highlight specific projects where you wrote complex queries, used CTEs, or created views. This will demonstrate your expertise and ability to handle the technical demands of the role.
✨Demonstrate DBT Knowledge
Since the role requires a strong background in DBT, come ready to explain how you've built and managed reusable transformation models. Share examples of how you've optimised data transformations and any challenges you've overcome using DBT.
✨Understand Snowflake's Capabilities
Familiarise yourself with Snowflake's features, especially around virtual warehouses and data sharing. Be ready to discuss how you've leveraged these capabilities in past roles to create scalable data solutions.
✨Collaborative Mindset
This position involves working closely with analysts and data scientists. Prepare to talk about your experience in collaborative environments, how you handle feedback during code reviews, and your approach to ensuring reliable data delivery across teams.