At a Glance
- Tasks: Design and build scalable data processing pipelines using AWS Glue and SQL.
- Company: Join First Derivative, a leader in data-driven financial services solutions.
- Benefits: Enjoy private healthcare, pension plans, and a cycle to work scheme.
- Why this job: Be part of a dynamic team transforming the financial sector with cutting-edge technology.
- Qualifications: Strong SQL skills and experience with AWS Glue or similar tools required.
- Other info: Diversity networks and access to skills certifications available.
The predicted salary is between £43,200 and £72,000 per year.
First Derivative is driven by people, data, and technology, unlocking the value of insight, hindsight, and foresight to drive organizations forward. Counting many of the world's leading investment banks as clients, we help them navigate the data-driven, digital revolution that is transforming the financial services sector. Our global teams span 15 offices, serving clients across EMEA, North America, and APAC.
First Derivative is an EPAM Systems, Inc. (NYSE: EPAM) company; EPAM is a leading global provider of digital platform engineering and development services. Together, we deliver advanced financial services solutions, empowering operational insight, driving innovation, and enabling more effective risk management in an increasingly data-centric world. Combining deep industry expertise with cutting-edge technology, we help clients stay ahead in a rapidly evolving financial landscape, offering comprehensive solutions that drive business transformation and sustainable growth.
We are seeking a highly skilled Data Engineer with strong SQL capabilities and hands-on experience with AWS Glue or equivalent Spark-based tools (e.g., Databricks).
You will be a key contributor to our Data Modernization initiative, helping to design and build scalable data processing pipelines that support our AWS-based data lake. The role involves working with large-scale datasets, optimizing for performance through techniques like partitioning, and delivering clean, reliable data to downstream consumers.
RESPONSIBILITIES
- Develop and maintain robust ETL pipelines using AWS Glue (Apache Spark) or Databricks – see the sketch after this list
- Write complex SQL queries, including Common Table Expressions (CTEs), stored procedures, and views, for data transformation and analysis
- Design and implement effective partitioning strategies in Glue, Athena, and other AWS-native tools to optimize performance and cost
- Ingest, clean, and transform structured and semi-structured data from multiple sources into the AWS data lake
- Collaborate with stakeholders to understand data requirements and deliver well-structured, high-quality datasets
- Troubleshoot performance issues in data pipelines and contribute to tuning and optimization
- Support data governance, lineage, and monitoring initiatives to ensure data quality and reliability
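To make the pipeline and partitioning responsibilities above concrete, here is a minimal sketch of what a Glue PySpark job along these lines could look like. It assumes a standard AWS Glue job runtime; the S3 paths, dataset, and column names (trades, trade_date, and so on) are hypothetical placeholders, not details of the actual project.

```python
# Minimal sketch of a Glue-style PySpark ETL job. All S3 paths, datasets,
# and column names below are invented placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest raw, semi-structured data (hypothetical bucket and format).
raw = spark.read.json("s3://example-raw-bucket/trades/")

# Clean and transform: drop malformed rows, derive partition columns.
clean = (
    raw.dropna(subset=["trade_id", "trade_date"])
       .withColumn("trade_date", F.to_date("trade_date"))
       .withColumn("year", F.year("trade_date"))
       .withColumn("month", F.month("trade_date"))
)

# Write to the data lake partitioned by year/month so that downstream
# engines such as Athena can prune partitions instead of scanning
# everything (lower cost, faster queries).
(clean.write
      .mode("overwrite")
      .partitionBy("year", "month")
      .parquet("s3://example-lake-bucket/curated/trades/"))

job.commit()
```

Partitioning the output by year and month is what later allows Athena to skip whole S3 prefixes and read only the data a query actually needs.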
REQUIREMENTS
- Excellent SQL skills – advanced experience writing performant queries using CTEs, procedures, and views
- Hands-on experience with AWS Glue (Spark-based ETL), or similar platforms like Apache Spark or Databricks
- Strong understanding of partitioning techniques for large-scale datasets in both databases and data lake environments (e.g., Glue, Athena, Spark)
- Familiarity with cloud data lake architectures and AWS data ecosystem (S3, Athena, Glue, etc.)
- Comfortable working with large volumes of data and optimizing jobs for performance and cost
- Experience in a collaborative environment, with the ability to communicate effectively across technical and non-technical teams
- Financial services experience is a plus, especially familiarity with reference, counterparty, or instrument data
WE OFFER
- Private Healthcare Package
- Pension
- Employee Assistance Programme
- Enhanced Maternity policy
- Group Life Protection Benefit
- Give as You Earn
- Cycle to Work Scheme
- Employee Referral Bonus Scheme
- Diversity Networks
- Access to a range of skills and certifications
Employer: EPAM
Contact: EPAM Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer (SQL) role
✨Tip Number 1
Familiarise yourself with AWS Glue and Databricks by exploring their documentation and tutorials. Hands-on experience is crucial, so consider setting up a small project to practice building ETL pipelines using these tools.
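If you want a concrete starting point, a sketch like the one below runs entirely locally (pip install pyspark), with no AWS account needed, and mirrors the shape of a Glue job; the data and column names are invented for practice.

```python
# A tiny local practice project: create some data, transform it, and
# write it out partitioned, just as a Glue job would.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("glue-practice").getOrCreate()

# Stand-in for data you would normally ingest from S3.
df = spark.createDataFrame(
    [("T-1", "2024-03-01", 100.5), ("T-2", "2024-03-02", 99.8)],
    ["trade_id", "trade_date", "price"],
)

# Practice a partitioned write, mirroring a data lake layout.
df.write.mode("overwrite").partitionBy("trade_date").parquet("out/trades")

spark.stop()
```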
✨Tip Number 2
Join online communities or forums related to data engineering and AWS technologies. Engaging with others in the field can provide insights into best practices and common challenges, which can be beneficial during interviews.
✨Tip Number 3
Brush up on your SQL skills, particularly focusing on writing complex queries with CTEs and stored procedures. Practising these will not only enhance your technical abilities but also prepare you for potential technical assessments.
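For practice, a CTE exercise like the following can be run locally with PySpark's SQL engine; the same query shape works in Athena. The trades view and its columns are made up for the exercise.

```python
# Practising CTEs with Spark SQL on an in-memory table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("cte-practice").getOrCreate()

spark.createDataFrame(
    [("ACME", "2024-03-01", 120.0), ("ACME", "2024-03-02", 125.0),
     ("GLOBX", "2024-03-01", 80.0)],
    ["counterparty", "trade_date", "notional"],
).createOrReplaceTempView("trades")

# The CTE keeps the aggregation step readable and reusable in the
# final query -- the pattern interviewers typically probe.
result = spark.sql("""
    WITH daily_totals AS (
        SELECT counterparty, trade_date, SUM(notional) AS total_notional
        FROM trades
        GROUP BY counterparty, trade_date
    )
    SELECT counterparty, MAX(total_notional) AS peak_day_notional
    FROM daily_totals
    GROUP BY counterparty
""")
result.show()
```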
✨Tip Number 4
Research First Derivative and EPAM Systems to understand their business model and the specific challenges they face in the financial services sector. Tailoring your discussions around their needs can set you apart during the interview process.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your SQL skills and experience with AWS Glue or similar tools. Use specific examples of projects where you've developed ETL pipelines or worked with large datasets.
Craft a Compelling Cover Letter: In your cover letter, explain why you're interested in the Senior Data Engineer position and how your background aligns with the responsibilities outlined in the job description. Mention any relevant financial services experience if applicable.
Showcase Technical Skills: When detailing your experience, focus on your proficiency with SQL, including complex queries and partitioning strategies. Highlight any hands-on experience with AWS services like Glue, Athena, or Databricks.
Prepare for Technical Questions: Anticipate technical questions related to data engineering, SQL performance tuning, and AWS tools. Be ready to discuss your problem-solving approach and provide examples from your past work.
How to prepare for a job interview at EPAM
✨Showcase Your SQL Skills
Be prepared to demonstrate your advanced SQL capabilities. Expect to discuss complex queries, including Common Table Expressions (CTEs), stored procedures, and views. You might even be asked to solve a problem on the spot, so brush up on your query writing skills!
✨Familiarise Yourself with AWS Tools
Since the role requires hands-on experience with AWS Glue or similar tools, make sure you understand how these platforms work. Be ready to discuss your experience with data ingestion, transformation, and the specific techniques you've used to optimise performance.
✨Understand Data Partitioning Techniques
Partitioning is crucial for optimising large-scale datasets. Be prepared to explain your understanding of partitioning strategies in AWS environments like Glue and Athena. Discuss any past experiences where you successfully implemented these techniques.
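If it helps to have something concrete to walk through, the sketch below demonstrates the core idea with local PySpark; Athena prunes S3 prefixes analogously. The paths and columns are placeholders.

```python
# Illustrating partition pruning: write data partitioned, then filter
# on the partition columns so the engine skips whole directories.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioning").getOrCreate()

df = spark.createDataFrame(
    [("T-1", "2024-01-15", 10.0), ("T-2", "2024-02-03", 20.0)],
    ["trade_id", "trade_date", "notional"],
).withColumn("dt", F.to_date("trade_date"))

# Layout on disk: out/part_demo/year=YYYY/month=M/... -- the directory
# names double as queryable columns.
(df.withColumn("year", F.year("dt"))
   .withColumn("month", F.month("dt"))
   .write.mode("overwrite")
   .partitionBy("year", "month")
   .parquet("out/part_demo"))

# Filtering on partition columns lets the reader skip non-matching
# directories entirely -- the essence of partition pruning.
jan = spark.read.parquet("out/part_demo").where("year = 2024 AND month = 1")
jan.explain()  # the physical plan shows PartitionFilters on year/month
```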
✨Communicate Effectively
This role involves collaboration with both technical and non-technical teams. Practice explaining complex data concepts in simple terms. Highlight any previous experiences where you successfully communicated data requirements or findings to stakeholders.