At a Glance
- Tasks: Build robust data pipelines and design scalable data architecture to drive business insights.
- Company: Ledgy, a tech-driven equity management platform on a mission to empower European entrepreneurship.
- Benefits: Competitive salary, remote work options, and a multicultural team environment.
- Why this job: Join a dynamic team and make a real impact in the world of data engineering.
- Qualifications: 2-3+ years in data engineering with skills in DBT, SQL, and Python.
- Other info: Exciting opportunity for career growth in a fast-paced, innovative company.
The predicted salary is between £36,000 and £60,000 per year.
At Ledgy, we’re on a mission to make Europe a powerhouse of entrepreneurship by building a modern, tech‑driven equity management and financial reporting platform for private and public companies. By 2025 we aim to be the leading provider for European IPOs and for share‑based payment reporting. We are a values‑driven company focused on humility, transparency, ambition, and impact, so we can deliver the best experience for our customers and end users.
As a Data Engineer at Ledgy, your mission is to build robust data pipelines, design scalable data architecture, and collaborate with teams to deliver insights that drive business decisions. Reporting directly to the Head of Operations & AI, you’ll play a key role in shaping our data engineering strategy.
Responsibilities
- Manage and optimize data infrastructure and ETL pipelines using Fivetran, Airbyte, and Google Cloud Platform, ensuring reliable data flow from multiple sources into our analytics ecosystem.
- Develop, test, and maintain DBT models that transform raw data into analytics‑ready datasets following best practices (see the sketch after this list).
- Create and manage LookML models in Looker to enable self‑service analytics for stakeholders across the company.
- Drive continuous improvement of our data engineering practices, tooling, and infrastructure as a key member of the Operations team.
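The posting itself contains no code, but as a rough illustration of the kind of raw‑to‑analytics transformation a DBT model performs, here is a minimal pandas sketch. The table and column names are hypothetical, not taken from Ledgy’s actual stack:

```python
import pandas as pd

# Hypothetical raw export, e.g. a Fivetran sync of a billing source.
raw = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount_cents": [1200, 3400, 560, 780],
    "created_at": ["2024-01-03", "2024-02-11", "2024-01-20", "2024-03-05"],
})

# Staging step: normalise units and derive a reporting month,
# mirroring what a DBT staging model would do in SQL.
staged = raw.assign(
    amount=raw["amount_cents"] / 100,
    month=pd.to_datetime(raw["created_at"]).dt.to_period("M"),
)

# Mart step: aggregate into an analytics-ready monthly revenue table.
monthly_revenue = (
    staged.groupby(["customer_id", "month"], as_index=False)["amount"].sum()
)
print(monthly_revenue)
```

In a real DBT project the same logic would live in versioned, tested SQL models, but the shape of the work is the same: raw events in, tidy aggregates out.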
Requirements
- 2‑3+ years’ experience building production data pipelines and analytics infrastructure, with DBT, SQL, and Python (Pandas, etc.).
- Experience implementing and managing ETL/ELT tools such as Fivetran or Airbyte.
- Ideally hands‑on experience with GCP (BigQuery).
- Proficiency in Looker, including LookML development.
- Strong plus if you have experience using n8n or similar automation tools.
- Experience with SaaS data sources (HubSpot, Stripe, Vitally, Intercom).
- Familiarity with AI‑powered development tools (Cursor, DBT Copilot) and a strong interest in leveraging cutting‑edge tools to improve workflow.
- Strong problem‑solving skills and the ability to debug complex data issues (a minimal example of this kind of check follows the list).
- Excellent communication skills, with the ability to explain technical concepts to non‑technical stakeholders.
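As a concrete, deliberately simple example of the debugging skill above, here is a sketch of the kind of data‑quality check an engineer might run on a suspect pipeline output. The function, table, and column names are hypothetical:

```python
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Quick checks to run when a pipeline output looks off."""
    return {
        "rows": len(df),
        "duplicate_keys": int(df[key].duplicated().sum()),
        "null_counts": {col: int(n) for col, n in df.isna().sum().items()},
    }

# Hypothetical pipeline output with a duplicated key and a missing value.
output = pd.DataFrame({"id": [1, 2, 2], "value": [10.0, None, 7.5]})
print(quality_report(output, key="id"))
# -> {'rows': 3, 'duplicate_keys': 1, 'null_counts': {'id': 0, 'value': 1}}
```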
Seniority level: Mid‑Senior level
Employment type: Contract
Job function: Information Technology and Engineering
Industries: Construction, Software Development, and IT Services and IT Consulting
Data Engineer employer: Ledgy
Contact Detail:
Ledgy Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to current employees at Ledgy on LinkedIn and ask about their experiences. A friendly chat can give you insider info and might even lead to a referral, which can significantly boost your chances of landing that interview.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data engineering projects, especially those involving DBT, SQL, and Python. When you apply through our website, include this portfolio to demonstrate your hands-on experience and problem-solving abilities.
✨Tip Number 3
Prepare for the technical interview! Brush up on your knowledge of ETL tools like Fivetran and Airbyte, and be ready to discuss how you've optimised data pipelines in the past. We want to see your thought process and how you tackle complex data issues.
✨Tip Number 4
Don’t forget the soft skills! Practice explaining technical concepts in simple terms. Being able to communicate effectively with non-technical stakeholders is key at Ledgy, so show us you can bridge that gap during your interviews.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV reflects the skills and experiences that match the Data Engineer role. Highlight your experience with DBT, SQL, and Python, and don’t forget to mention any hands-on work with GCP or ETL tools like Fivetran.
Craft a Compelling Cover Letter: Use your cover letter to tell us why you’re passionate about data engineering and how you align with our values of humility, transparency, ambition, and impact. Share specific examples of your past projects that demonstrate your problem-solving skills.
Showcase Your Projects: If you’ve worked on any relevant projects, whether personal or professional, make sure to include them. We love seeing how you’ve built data pipelines or optimised data infrastructure in real-world scenarios.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you don’t miss out on any updates regarding your application status!
How to prepare for a job interview at Ledgy
✨Know Your Tech Stack
Make sure you’re well-versed in the tools mentioned in the job description, like Fivetran, Airbyte, and Google Cloud Platform. Brush up on your DBT, SQL, and Python skills, as these will likely come up during technical discussions.
✨Showcase Your Problem-Solving Skills
Prepare to discuss specific examples where you've debugged complex data issues or optimised data pipelines. Use the STAR method (Situation, Task, Action, Result) to structure your answers and highlight your impact.
✨Communicate Clearly
Since you'll need to explain technical concepts to non-technical stakeholders, practice simplifying your explanations. Think about how you can convey complex ideas in a straightforward way that anyone can understand.
✨Emphasise Continuous Improvement
Ledgy values ambition and impact, so be ready to talk about how you've driven improvements in your previous roles. Share any experiences where you’ve implemented new tools or practices that enhanced data engineering processes.