Data Engineer

Full-Time · £50,000–£60,000 / year (est.) · No home office possible
Perch Group

At a Glance

  • Tasks: Build and maintain data pipelines using cutting-edge tools like Azure Data Factory and Databricks.
  • Company: Join Perch Group, a leader in ethical debt resolution through innovative technology.
  • Benefits: Earn £50,000 plus bonuses, enjoy flexible hours, and access professional development opportunities.
  • Other info: Dynamic work environment with a focus on collaboration and continuous improvement.
  • Why this job: Make a real impact in the financial services sector while growing your data engineering skills.
  • Qualifications: 3+ years in T-SQL, Azure Data Factory, and Databricks; strong Python skills required.

The predicted salary is between £50,000 and £60,000 per year.

Perch Group are searching for a Data Engineer. At Perch Group, our vision is clear: to lead the UK debt purchase and collection industry by harnessing cutting-edge technology to drive ethical, efficient, and data-driven debt resolution. Our mission is to empower hundreds of thousands of customers each year to positively engage with and resolve their outstanding debts. We do this through an empathetic, customer-centric approach that is at the heart of our success.

The Role

We’re looking for a Data Engineer to support the development of our modern data platform and contribute to our transition toward metadata-driven and configuration-led engineering. This role is ideal for someone with strong fundamentals in data engineering who is ready to grow into more advanced automation-focused work. You will build and maintain data pipelines, contribute to reusable engineering patterns, and work alongside senior engineers to uplift automation, data quality, and delivery standards.

What will your day-to-day look like?

  • Engineering Delivery
    • Build, maintain, and enhance data pipelines using Azure Data Factory, Databricks, Python, and T-SQL.
    • Apply parameterisation, configuration, and reusable components in ADF and Databricks where appropriate.
    • Support the creation of metadata-driven patterns by implementing components and contributing to framework evolution.
    • Assist in developing and refining data models and data warehouse layers.
    • Ensure pipelines are performant, reliable, and aligned with engineering and governance standards.
  • Collaboration & Continuous Improvement
    • Work closely with senior engineers, analysts, and architects to understand requirements and deliver robust, well-designed solutions.
    • Contribute to code reviews, documentation, and quality improvement initiatives.
    • Participate in DevOps practices such as CI/CD, automated testing, and version control.
  • Operational Support
    • Troubleshoot and resolve pipeline issues, ensuring stability and data accuracy.
    • Monitor pipeline performance and help implement improvements.
    • Support continuous optimisation and lifecycle management.

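The configuration-led pipeline pattern described above can be sketched in a few lines of plain Python. This is a hedged illustration only: in practice the pattern would live in ADF pipelines and Databricks notebooks, and every table name, column, and function below is invented for the sketch.

```python
# Sketch of a metadata-driven pipeline: generic code driven by config entries.
# All names here are illustrative, not Perch Group's actual schema.

PIPELINE_CONFIG = [
    # Each entry drives one generic extract-project-load step.
    {"source": "raw.customers", "target": "staging.customers",
     "columns": ["id", "name", "balance"]},
    {"source": "raw.accounts", "target": "staging.accounts",
     "columns": ["id", "customer_id", "status"]},
]

def run_step(step, read_table, write_table):
    """Apply one config entry: read, project to the configured columns, write."""
    rows = read_table(step["source"])
    projected = [{col: row[col] for col in step["columns"]} for row in rows]
    write_table(step["target"], projected)
    return len(projected)

def run_pipeline(config, read_table, write_table):
    """Run every configured step; return row counts keyed by target table."""
    return {step["target"]: run_step(step, read_table, write_table)
            for step in config}
```

The point of the pattern: onboarding a new feed becomes a new config entry rather than a new pipeline, which is what "configuration-led engineering" buys you.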
Does this sound like you?

Essential

  • 3+ years T-SQL, including joins, transformations, stored procedures, and performance basics
  • 3+ years Azure Data Factory, including pipeline development, parameter usage, and basic templates
  • 3+ years Databricks experience (PySpark, Delta Lake, notebooks)
  • 2+ years Python experience
  • Experience working with data warehouses and structured modelling concepts
  • Strong understanding of ETL/ELT practices and data-centric engineering

Desirable

  • Exposure to metadata-driven or config-driven engineering practices
  • Basic PySpark optimisation or Delta Lake performance tuning
  • Experience using Azure DevOps (repos, pipelines)
  • Understanding of data governance, data quality tooling, or lineage concepts
  • Experience in regulated environments or financial services

The Application Timeline

  • A first stage call with the internal recruitment team (15 minute call)
  • An on-site technical test (up to 2 hours)
  • A face-to-face or video call with the hiring manager (45 minutes)

What’s In It For You

  • £50,000, plus a bonus of up to 20% of your annual salary.
  • Based at either our Blackpool or Manchester office.
  • 37.5 hours per week, with flexible working between our core hours of 8am–6pm, Monday to Friday.
  • The opportunity to complete formal qualifications and learn on the job in a successful, growing organisation.
  • Many more benefits to support your wellbeing and professional development.

Typically, a successful applicant will move through this timeline in 2–3 weeks. Please note that we will close this role once we have enough applications for the next stages, so you should submit your application as soon as possible to avoid disappointment. As we may experience a high volume of applications, if you do not receive a response within 3 weeks of applying, please assume you have been unsuccessful.

We’re an equal opportunity employer. All applicants will be considered for employment without regard to age, ethnicity, religion, sex, sexual orientation, gender identity, family or parental status, national origin, veteran status, neurodiversity, or disability. If you have any questions, or suggestions for how we can assist you in your application due to disability or personal reasons, please get in touch by email.

PLEASE NOTE - As we are a financial services company, we are required to run DBS and Credit Checks on all of our successful candidates. This information MUST be disclosed at the time of your initial screening call should you be invited to interview.

Data Engineer employer: Perch Group

At Perch Group, we pride ourselves on being an excellent employer by fostering a collaborative and innovative work culture that prioritises employee growth and well-being. As a Data Engineer, you will have the opportunity to work with cutting-edge technology in a supportive environment, with flexible working hours and the chance to pursue formal qualifications while contributing to meaningful projects that empower customers in their debt resolution journey.

Perch Group

Contact Detail:

Perch Group Recruiting Team

StudySmarter Expert Advice🤫

We think this is how you could land the Data Engineer role

Tip Number 1

Get your networking game on! Connect with folks in the industry, especially those at Perch Group. LinkedIn is a goldmine for this – drop them a message, ask about their experiences, and show genuine interest in what they do.

Tip Number 2

Prepare for that technical test like it’s the final exam! Brush up on your T-SQL, Azure Data Factory, and Databricks skills. Practise building data pipelines and get comfy with Python – you want to impress them with your hands-on knowledge.
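As a concrete warm-up for the technical test, you could first reproduce a typical join-and-aggregate — the kind of query you would normally write in T-SQL or PySpark — in plain Python. A minimal sketch, with invented sample fields:

```python
# Practice exercise: join accounts to customers and sum balances per customer,
# i.e. the Python equivalent of a T-SQL JOIN + GROUP BY. Field names are invented.
from collections import defaultdict

def total_balance_by_customer(customers, accounts):
    """Join on customer_id, then aggregate account balances per customer name."""
    names = {c["id"]: c["name"] for c in customers}
    totals = defaultdict(float)
    for acc in accounts:
        totals[names[acc["customer_id"]]] += acc["balance"]
    return dict(totals)
```

Once it works, try rewriting the same logic as a T-SQL `JOIN` with `GROUP BY`, or as a PySpark `join` followed by `groupBy().sum()` — switching between the three representations is exactly the fluency the test is likely to probe.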

Tip Number 3

Don’t just wing the interview – come armed with questions! Ask about their data governance practices or how they approach automation. This shows you’re not only interested in the role but also in contributing to their mission.

Tip Number 4

Apply through our website for a smoother process! It’s the best way to ensure your application gets seen by the right people. Plus, you’ll be one step closer to joining a team that’s all about ethical and efficient debt resolution.

We think you need these skills to ace the Data Engineer role

T-SQL
Azure Data Factory
Databricks
Python
Data Warehousing
ETL/ELT Practices
PySpark

Some tips for your application 🫡

Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with T-SQL, Azure Data Factory, and Databricks. We want to see how your skills align with our needs!

Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Share your passion for data engineering and how you can contribute to our mission at Perch Group. Let us know why you're excited about this opportunity.

Showcase Your Projects: If you've worked on relevant projects, don’t hold back! Include links or descriptions of your work with data pipelines, automation, or any other relevant experience. We love seeing what you've accomplished!

Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of applications and ensures you’re considered for the role. Don’t miss out on this opportunity!

How to prepare for a job interview at Perch Group

Know Your Tech Stack

Make sure you’re well-versed in Azure Data Factory, Databricks, T-SQL, and Python. Brush up on your knowledge of data pipelines and ETL/ELT practices, as these will be crucial in your role. Being able to discuss specific projects where you've used these technologies will really impress the interviewers.

Showcase Your Problem-Solving Skills

Prepare to discuss how you've tackled challenges in previous roles, especially around troubleshooting pipeline issues or optimising performance. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight your analytical thinking.

Emphasise Collaboration

Since this role involves working closely with senior engineers and analysts, be ready to talk about your experience in team settings. Share examples of how you’ve contributed to code reviews or participated in DevOps practices like CI/CD. This shows you’re a team player who values collaboration.

Understand the Company’s Vision

Familiarise yourself with Perch Group's mission to lead the debt purchase and collection industry through technology. Be prepared to discuss how your skills can contribute to their goals of ethical and efficient debt resolution. This will demonstrate your genuine interest in the company and its values.