At a Glance
- Tasks: Design and implement scalable data solutions using AWS and Snowflake.
- Company: Join a dynamic team in the heart of London, focused on innovative data engineering.
- Benefits: Enjoy hybrid working, competitive salary, and a 15% bonus.
- Why this job: Be part of a culture that values collaboration, learning, and impactful data solutions.
- Qualifications: Experience with AWS and Snowflake, plus strong programming skills, is essential.
- Other info: Opportunity to mentor others and contribute to exciting projects.
The predicted salary is between £60,000 and £80,000 per year.
Senior Data Engineer
MUST HAVE SNOWFLAKE, AWS
Salary – £70-80k with a 15% bonus
Hybrid working – a couple of days in the office
City of London
We are looking for:
- A good understanding of data engineering principles
- A strong technical grasp of Snowflake, including automating it and transforming complex datasets
- A solid AWS skillset
- Delivery experience
- Experience building and implementing data warehousing solutions using Snowflake and AWS
Key Responsibilities:
- Design and implement scalable, secure, and cost-efficient data solutions on AWS, leveraging services such as Glue, Lambda, S3, Redshift, and Step Functions.
- Lead the development of robust data pipelines and analytics platforms, ensuring high availability, performance, and maintainability.
- Demonstrate proficiency in software engineering principles, contributing to the development of reusable libraries, APIs, and infrastructure-as-code components that support the broader data and analytics ecosystem.
- Contribute to the evolution of the team's data engineering standards and best practices, including documentation, testing, and architectural decisions.
- Develop and maintain data models and data marts that support self-service analytics and enterprise reporting.
- Drive automation and CI/CD practices for data workflows, ensuring reliable deployment and monitoring of data infrastructure.
- Ensure data quality, security, and compliance with internal policies and external regulations.
- Continuously optimize data processing workflows for performance and cost, using observability tools and performance metrics.
- Collaborate cross-functionally with DevOps, analytical engineers, data analysts, and business stakeholders to align data solutions with product and business goals.
- Mentor and support team members through code reviews, pair programming, and knowledge sharing, fostering a culture of continuous learning and engineering excellence.
Skills and Experience:
- Proven experience as a data engineer with strong hands-on programming skills and software engineering fundamentals, with experience building scalable solutions in cloud environments (AWS preferred)
- Extensive experience with AWS services, e.g. EC2, S3, RDS, DynamoDB, Redshift, Lambda, API Gateway
- Solid foundation in software engineering principles, including version control (Git), testing, CI/CD, modular design, and clean code practices. Experience developing reusable components and APIs is a strong plus.
- Advanced SQL skills for complex data queries and transformations
- Proficiency in at least one programming language, with Python strongly preferred for data processing, automation, and pipeline development
- AWS or Snowflake certifications are a plus
- Hands-on experience with AWS services such as Glue (Spark), Lambda, Step Functions, ECS, Redshift, and SageMaker.
- Enthusiasm for cross-functional work and adaptability beyond traditional data engineering.
- Examples such as building APIs, integrating with microservices, or contributing to backend systems – not just data pipelines or data modelling.
- Experience with tools such as GitHub Actions, Jenkins, AWS CDK, CloudFormation, and Terraform, contributed personally to the candidate's projects rather than handled by a separate DevOps team.
Employer: I3 Resourcing Limited
Contact Details:
I3 Resourcing Limited Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer role in the City of London
✨Tip Number 1
Familiarise yourself with the latest features and best practices of Snowflake and AWS. Join online forums or communities where data engineers discuss their experiences and solutions, as this can provide you with insights that are highly relevant to the role.
✨Tip Number 2
Showcase your hands-on experience by working on personal projects that involve building data pipelines or APIs using AWS services. This practical experience will not only enhance your skills but also give you concrete examples to discuss during interviews.
✨Tip Number 3
Network with professionals in the data engineering field, especially those who work with Snowflake and AWS. Attend meetups or webinars to connect with potential colleagues and learn about the latest trends and challenges in the industry.
✨Tip Number 4
Prepare to discuss your approach to cross-functional collaboration. Think of specific examples where you've worked with DevOps, analysts, or other stakeholders to deliver data solutions, as this is a key aspect of the role we're looking to fill.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with Snowflake and AWS, as these are essential for the role. Include specific projects where you've implemented data warehousing solutions or built scalable data pipelines.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the position and the company. Mention your hands-on experience with AWS services and how you've contributed to cross-functional teams in previous roles.
Showcase Relevant Projects: Include examples of your work that demonstrate your technical skills, such as building APIs or integrating with microservices. Highlight any experience with tools like GitHub Actions, Jenkins, or Terraform.
Highlight Soft Skills: Don't forget to mention your ability to mentor team members and collaborate across functions. This role requires adaptability and a commitment to continuous learning, so showcase these qualities in your application.
How to prepare for a job interview at I3 Resourcing Limited
✨Showcase Your Technical Skills
Make sure to highlight your hands-on experience with Snowflake and AWS during the interview. Be prepared to discuss specific projects where you've implemented data warehousing solutions or built scalable data pipelines, as this will demonstrate your technical grasp of the required tools.
✨Demonstrate Problem-Solving Abilities
Use examples from your past work where you faced challenges in data engineering and explain how you overcame them. This could include optimising data processing workflows or ensuring data quality and compliance, showcasing your ability to think critically and adapt.
✨Emphasise Cross-Functional Collaboration
Since the role requires collaboration with various teams, share experiences where you've worked alongside DevOps, analysts, or business stakeholders. Highlight how you aligned data solutions with broader business goals, which shows your adaptability beyond traditional data engineering.
✨Prepare for Technical Questions
Expect to be asked about specific AWS services and data engineering principles. Brush up on your knowledge of tools like Glue, Lambda, and Redshift, and be ready to explain how you've used them in your previous roles. This preparation will help you answer confidently and accurately.