Senior Data Engineer

Full-Time · £80,000 / year (est.) · Home office (partial)
Fynity

At a Glance

  • Tasks: Design and implement ETL pipelines using cutting-edge big data technologies.
  • Company: Join a forward-thinking consultancy driving digital transformation.
  • Benefits: Competitive salary, remote work, and collaboration with skilled professionals.
  • Other info: Dynamic environment with excellent career growth opportunities.
  • Why this job: Make a real impact on high-profile government projects with innovative data solutions.
  • Qualifications: Experience in Python, Spark, and AWS services required.

Location: Remote, with occasional visits to London

Salary: Up to £80,000

Start Date: ASAP

About the Role

Join a dynamic Digital Transformation Consultancy as a Data Engineer and play a pivotal role in delivering innovative, data‑driven solutions for high‑profile government clients. You’ll be responsible for designing and implementing robust ETL pipelines, leveraging cutting‑edge big data technologies, and driving excellence in cloud‑based data engineering. This role offers the opportunity to work with leading technologies, collaborate with data architects and scientists, and make a significant impact in a fast‑paced, challenging environment.

Key Responsibilities

  • Design, implement, and debug ETL pipelines to process and manage complex datasets.
  • Leverage big data tools, including Apache Kafka, Spark, and Airflow, to deliver scalable solutions.
  • Collaborate with stakeholders to ensure data quality and alignment with business goals.
  • Utilise programming expertise in Python, Scala, and SQL for efficient data processing.
  • Build data pipelines using cloud‑native services on AWS, including Lambda, Glue, Redshift, and API Gateway.
  • Monitor and optimise data solutions using AWS CloudWatch and other tools.

What We’re Looking For

  • Strong hands‑on experience designing, implementing, and debugging ETL pipelines.
  • Expertise in Python, PySpark, and SQL.
  • Expertise with Spark and Airflow.
  • Experience designing data pipelines using cloud‑native services on AWS.
  • Extensive knowledge of AWS services such as API Gateway, Lambda, Redshift, Glue, and CloudWatch.
  • IaC experience deploying AWS resources using Terraform.
  • Hands‑on experience setting up CI/CD workflows using GitHub Actions.

SC Clearance Criteria

Must be a British Citizen or have resided in the UK for at least 5 consecutive years. Detailed employment history for the past 10 years or longer may be required.

Why Join Us?

  • Be part of a forward‑thinking consultancy driving digital transformation for industry leaders.
  • Work with the latest big data and cloud technologies.
  • Collaborate with a team of skilled professionals in a fast‑paced and rewarding environment.

If you’re passionate about delivering impactful data solutions and meet the criteria for this role, we’d love to hear from you. Apply today and lead the way in digital transformation!

Senior Data Engineer employer: Fynity

Fynity is an exceptional employer, offering a vibrant work culture that fosters innovation and collaboration in the cloud technology sector. As a Senior Data Engineer, you will have the opportunity to work remotely while occasionally visiting London, allowing for a flexible work-life balance. With access to cutting-edge technologies and a commitment to employee growth, Fynity empowers its team members to excel in their careers while making a significant impact on high-profile government projects.

Contact Detail:

Fynity Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Senior Data Engineer role

✨Network Like a Pro

Get out there and connect with people in the industry! Attend meetups, webinars, or even just grab a coffee with someone who works in data engineering. Building relationships can lead to job opportunities that aren’t even advertised.

✨Show Off Your Skills

Create a portfolio showcasing your projects, especially those involving ETL pipelines and big data tools like Apache Kafka and Spark. Having tangible examples of your work can really impress potential employers and set you apart from the crowd.

✨Ace the Interview

Prepare for technical interviews by brushing up on your Python, SQL, and cloud services knowledge. Practice common interview questions and be ready to discuss your past projects in detail. Confidence and clarity can make a huge difference!

✨Apply Through Our Website

Don’t forget to apply directly through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love hearing from passionate candidates who are eager to join our team.

We think you need these skills to ace the Senior Data Engineer role

ETL Pipeline Design and Implementation
Apache Kafka
Apache Spark
Apache Airflow
Python
PySpark
SQL
AWS Services (Lambda, Glue, Redshift, API Gateway, CloudWatch)
Infrastructure as Code (IaC) with Terraform
CI/CD Workflows using GitHub Actions
Data Quality Assurance
Collaboration with Stakeholders
Data Processing and Management
Big Data Technologies

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the role of Senior Data Engineer. Highlight your experience with ETL pipelines, big data tools like Apache Kafka and Spark, and any cloud services you've worked with, especially AWS.

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your skills align with our mission at StudySmarter. Don’t forget to mention specific projects or achievements that showcase your expertise.

Showcase Relevant Skills: Be sure to emphasise your programming skills in Python, Scala, and SQL. Mention any hands-on experience with CI/CD workflows and IaC using Terraform, as these are key for the role. We want to see how you can contribute to our team!

Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re keen on joining our team!

How to prepare for a job interview at Fynity

✨Know Your Tech Stack

Make sure you’re well-versed in the technologies mentioned in the job description, like Apache Kafka, Spark, and AWS services. Brush up on your Python, PySpark, and SQL skills, as you might be asked to solve technical problems or discuss your past projects using these tools.

✨Showcase Your Problem-Solving Skills

Prepare to discuss specific challenges you've faced while designing and implementing ETL pipelines. Use the STAR method (Situation, Task, Action, Result) to structure your answers, highlighting how you overcame obstacles and delivered successful data solutions.

✨Understand the Business Context

Familiarise yourself with the consultancy's role in digital transformation and how data engineering fits into that picture. Be ready to discuss how your work can align with business goals and improve data quality for high-profile clients.

✨Ask Insightful Questions

Prepare thoughtful questions about the team dynamics, project expectations, and the company’s approach to cloud technology. This shows your genuine interest in the role and helps you assess if it’s the right fit for you.
