Data Engineer

Full-Time | £60,000 - £80,000 / year (est.) | Home office (partial)

At a Glance

  • Tasks: Design and optimise data systems for blockchain intelligence, transforming complex datasets into actionable insights.
  • Company: Join Elliptic, a leader in blockchain intelligence with a collaborative and innovative culture.
  • Benefits: Enjoy hybrid working, generous leave, health insurance, and a personal development budget.
  • Other info: Embrace a culture of openness, autonomy, and continuous improvement with diverse teams.
  • Why this job: Make a real impact on the future of finance while working with cutting-edge technologies.
  • Qualifications: Experience in data engineering and familiarity with big data frameworks like Spark and Databricks.

The predicted salary is between £60,000 and £80,000 per year.

About the role

Help shape the future of blockchain intelligence at Elliptic. At Elliptic, we’re building the intelligence layer for the future of finance. Our teams transform complex blockchain and off‑chain data into actionable insight, empowering financial institutions, regulators, and businesses to innovate with confidence. Guided by our mission to make digital‑asset intelligence seamlessly accessible, we design and scale the data streams and services that power Elliptic’s analytics and decisioning products. As a Data Engineer, you’ll design and optimise systems that process large‑scale blockchain and off‑chain datasets, enabling organisations worldwide to make trusted, data‑driven decisions. Whether you join one of our platform‑focused teams or those that work directly on product data, you’ll tackle challenges spanning batch and streaming processing, building high‑quality, scalable solutions for a rapidly evolving ecosystem.

Responsibilities

  • Build and maintain distributed data pipelines using Scala, Spark, and cloud technologies
  • Collaborate with engineers, data scientists, and product teams to deliver reliable, scalable data systems
  • Design and optimise data ingestion and transformation workflows across blockchain and traditional datasets
  • Ensure accuracy, scalability, and efficiency in systems processing hundreds of millions of daily data points
  • Evaluate design options and trade‑offs across performance, scalability, reliability, and cost
  • Contribute to the full lifecycle of data platform development from design and deployment to continuous improvement
  • Strengthen pipeline reliability, observability, and automation through code and tooling improvements
  • Grow your influence and take on greater responsibility as you deepen your understanding of our distributed systems and platform architecture

Technical Environment

Scala | Spark | Databricks | AWS | Airflow | Kubernetes | Terraform | Functional Programming

No Scala experience yet? If you are an experienced data engineer eager to learn, we will support you.
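For candidates new to Scala, a purely illustrative sketch of the functional style this stack favours may help. The example below uses only the Scala standard library (no Spark dependency); the `TxRecord` type, its field names, and the CSV-like input format are hypothetical, chosen to mirror the groupBy-and-sum shape common in data pipelines:

```scala
// Hypothetical record type for illustration only.
final case class TxRecord(address: String, amount: BigDecimal)

object PipelineSketch {
  // Parse one comma-separated line into a record; malformed lines are dropped.
  def parse(line: String): Option[TxRecord] =
    line.split(",") match {
      case Array(addr, amt) =>
        scala.util.Try(BigDecimal(amt.trim)).toOption.map(TxRecord(addr.trim, _))
      case _ => None
    }

  // Total amount per address -- the same shape as a Spark groupBy/sum,
  // expressed here over plain in-memory collections.
  def totals(lines: Seq[String]): Map[String, BigDecimal] =
    lines.flatMap(parse)
      .groupMapReduce(_.address)(_.amount)(_ + _)
}
```

In a real pipeline the same transformation would typically be expressed over Spark Datasets or DataFrames rather than in-memory collections, but the immutable, expression-oriented style carries over directly.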

You will fit right in if you:

  • Enjoy writing clean, well‑tested, and efficient code
  • Use data and experimentation to make informed decisions
  • Thrive in a collaborative, open culture where sharing and feedback are part of daily work
  • Are curious about new technologies, including how AI and automation can enhance data engineering
  • Appreciate an environment that values autonomy, mentoring, and personal development
  • Want to take advantage of AI‑driven productivity tools as part of your day‑to‑day engineering

What we are looking for

We welcome engineers at different stages of their careers and will match your level to your experience so you can keep growing and make an impact.

  • Experience delivering and maintaining distributed data pipelines
  • Practical knowledge of Spark, Databricks, or similar big data frameworks
  • Familiarity with cloud infrastructure (AWS, Azure, or GCP)
  • Understanding of data‑architecture trade‑offs, such as scalability, resilience, and observability
  • Interest or experience in functional programming

Don’t tick all the boxes? We’re still interested to hear from you if you think you’d be a good fit.

Bonus Points

  • Experience with stream processing concepts such as delivery semantics, ordering, or partitioning
  • Hands‑on work with Infrastructure as Code (Terraform or CloudFormation)
  • Knowledge of container orchestration (Docker, Kubernetes, Helm)
  • An interest in blockchain and cryptocurrency technology, or a desire to learn
  • Experience applying AI or automation within deployed services

Engineering Culture

Our engineering culture is grounded in openness, autonomy, and continuous improvement. We believe great ideas can come from anywhere, so our engineers are encouraged to experiment, ask questions, and learn quickly. We use functional programming for clarity and reliability, and we rely on peer reviews, data‑driven decisions, and open discussions. Collaboration is central: you will work alongside diverse teams who share knowledge freely. Whether you are designing a new data pipeline or improving system performance, you will find an environment that values curiosity, technical excellence, and shared impact over hierarchy.

Be part of the team

If you are excited about building the data backbone that powers Elliptic’s data platform and helps organisations across the world act faster and see further, we would love to hear from you. At Elliptic, we believe the best ideas come from diverse teams. We encourage applications from people of all backgrounds, identities, and experiences. If you are excited about our mission but are not sure you meet all the requirements, please still apply. Apply today and help define the data infrastructure behind the future of blockchain intelligence.

Job Benefits

How we work

  • Hybrid working and the option to work from almost anywhere for up to 90 days per year
  • £500 Remote working budget to set up your home office space

Learning & Development

  • $1,000 Learning & Development budget to use on anything (agreed with your manager) that contributes to your growth and development

Vacation / Leave

  • Holidays: 25 days of annual leave + bank holidays
  • An extra day for your birthday
  • Enhanced parental leave: we provide eligible employees, regardless of gender or whether they become a parent by birth or adoption, 16 weeks of fully paid leave.

Benefits

  • Private Health Insurance - we use Vitality!
  • Full access to Spill Mental Health Support
  • Life Assurance: we hope you will never need this, but our cover is four times your salary, paid to your beneficiaries
  • £100 Crypto for you!
  • Cycle to Work Scheme

Data Engineer employer: Elliptic

At Elliptic, we pride ourselves on being an exceptional employer with a collaborative and open culture, where innovation thrives and every team member's voice is valued. With a strong focus on personal development, we offer generous learning budgets, hybrid working options, and comprehensive benefits, including enhanced parental leave and private health insurance, so our employees can grow both professionally and personally while contributing to the future of blockchain intelligence.

Contact Detail:

Elliptic Recruiting Team

StudySmarter Expert Advice 🤫

We think this is how you could land the Data Engineer role

✨Tip Number 1

Network like a pro! Reach out to current or former employees at Elliptic on LinkedIn. A friendly chat can give you insider info and maybe even a referral, which can really boost your chances.

✨Tip Number 2

Show off your skills in action! If you’ve got a portfolio or GitHub with projects related to data engineering, make sure to highlight them during interviews. It’s a great way to demonstrate your expertise and passion.

✨Tip Number 3

Prepare for those technical interviews! Brush up on your Scala and Spark knowledge, and be ready to tackle some coding challenges. Practising common data engineering problems can help you feel more confident.

✨Tip Number 4

Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, it shows you’re genuinely interested in being part of the Elliptic team.

We think you need these skills to ace the Data Engineer role

Scala
Spark
Databricks
AWS
Airflow
Kubernetes
Terraform
Functional Programming
Data Pipeline Development
Cloud Infrastructure
Data Architecture
Stream Processing
Infrastructure as Code
Container Orchestration
AI and Automation

Some tips for your application 🫡

Tailor Your Application: Make sure to customise your CV and cover letter for the Data Engineer role. Highlight your experience with Scala, Spark, and any cloud technologies you've worked with. We want to see how your skills align with our mission at Elliptic!

Show Your Passion for Data: In your application, let us know why you're excited about data engineering and blockchain technology. Share any projects or experiences that demonstrate your enthusiasm and curiosity. We love candidates who are eager to learn and grow!

Be Clear and Concise: When writing your application, keep it straightforward and to the point. Use clear language to describe your achievements and skills. We appreciate well-structured applications that make it easy for us to see your potential.

Apply Through Our Website: We encourage you to submit your application directly through our website. This way, you can ensure it reaches the right people and you'll have access to all the latest updates about the role. Don't miss out on this opportunity!

How to prepare for a job interview at Elliptic

✨Know Your Tech Stack

Familiarise yourself with the technologies mentioned in the job description, like Scala, Spark, and AWS. Be ready to discuss your experience with these tools and how you've used them to build or optimise data pipelines.

✨Showcase Your Problem-Solving Skills

Prepare to discuss specific challenges you've faced in previous roles and how you tackled them. Use examples that highlight your ability to design scalable solutions and make data-driven decisions.

✨Emphasise Collaboration

Since the role involves working closely with engineers and data scientists, be prepared to talk about your experience in collaborative environments. Share examples of how you've contributed to team projects and fostered open communication.

✨Express Your Curiosity

Demonstrate your interest in new technologies, especially AI and automation. Discuss any relevant projects or learning experiences that show your eagerness to grow and adapt in the fast-evolving field of data engineering.
