At a Glance
- Tasks: Design and build data pipelines using Databricks for a modern cloud analytics platform.
- Company: Join a forward-thinking company focused on data innovation and inclusivity.
- Benefits: Competitive market rate, remote work flexibility, and a supportive team environment.
- Why this job: Make a real impact by transforming data systems and enhancing analytics capabilities.
- Qualifications: Experience with Databricks, data engineering, and strong problem-solving skills.
- Other info: Opportunity for growth in a dynamic, collaborative setting with diverse perspectives.
The predicted salary is between £36,000 and £60,000 per year.
Location: Remote (on-site in Derby once a month)
Employment Type: 6-month Contract
Rate: Market Rate (Outside IR35)
About the Role:
I'm looking to speak with an experienced Data Engineer to support a major data platform modernisation, centred around Databricks. You'll play a key role in migrating data from legacy/on-premise systems to a modern cloud-based analytics platform, building scalable data pipelines and robust data models to support reporting and advanced analytics. This role will involve hands-on engineering across the full data lifecycle, from ingestion and transformation through to modelling and optimisation, working closely with analytics, reporting, and business stakeholders.
Key Responsibilities:
- Design, build, and maintain data pipelines using Databricks (Spark / PySpark / SQL); an illustrative sketch follows this list.
- Support the migration of on-premise data sources to a cloud-based Databricks platform.
- Develop and optimise data models (e.g. dimensional, star schema, or medallion architecture).
- Implement best practices for Delta Lake, performance tuning, and cost optimisation.
- Identify and resolve data quality, reliability, and performance issues during migration.
- Collaborate with stakeholders to translate business requirements into scalable data solutions.
- Contribute to sprint planning, delivery, and issue tracking using Jira.
- Help establish and promote data engineering standards, governance, and documentation.
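To make the day-to-day work concrete, here is a minimal sketch of what a medallion-style pipeline on Databricks might look like, assuming a Databricks runtime with Delta Lake available. The bronze/silver schemas, the /mnt/raw/orders landing path, and all column names are hypothetical, not taken from this job spec.

```python
# Illustrative bronze -> silver medallion pipeline (all names are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks
spark.sql("CREATE SCHEMA IF NOT EXISTS bronze")
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")

# Bronze: land the raw files as-is, stamping each batch for auditability.
bronze = (
    spark.read.format("json")
    .load("/mnt/raw/orders")  # hypothetical landing zone
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: drop unusable rows, deduplicate, and enforce types for analytics.
silver = (
    spark.read.table("bronze.orders")
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Routine Delta Lake maintenance: compact small files and co-locate rows by a
# common filter column, which typically improves scan speed and cost.
spark.sql("OPTIMIZE silver.orders ZORDER BY (order_date)")
```

The append-only bronze layer keeps an untouched copy of the source for replay and debugging, while silver carries the cleaned, typed tables that reporting builds on.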
Tech Stack & Tools:
- Core Technologies: Databricks, Apache Spark (PySpark / Spark SQL), Delta Lake, SQL
Focus Areas:
- Databricks platform development
- Cloud data migration
- Data modelling & analytics-ready datasets (a star-schema sketch follows this list)
- Modern data engineering best practices
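Purely as an illustration of the modelling focus area, the sketch below declares a small star schema as Delta tables. The gold schema and every table and column name are assumptions for the example, not part of the brief.

```python
# Illustrative star schema: one fact table joined to a conformed dimension.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.sql("CREATE SCHEMA IF NOT EXISTS gold")

# Dimension: one row per customer, keyed by a surrogate key.
spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        region       STRING
    ) USING DELTA
""")

# Fact: one row per order; customer_key joins to gold.dim_customer.
spark.sql("""
    CREATE TABLE IF NOT EXISTS gold.fact_orders (
        order_id     STRING,
        customer_key BIGINT,
        order_date   DATE,
        amount       DECIMAL(18, 2)
    ) USING DELTA
""")
```

Analytics-ready datasets then reduce to simple fact-to-dimension joins that BI and reporting tools can consume directly.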
SPG Resourcing is an equal opportunities employer and is committed to fostering an inclusive workplace which values and benefits from the diversity of the workforce we hire. We offer reasonable accommodation at every stage of the application and interview process.
Contact Detail:
SPG Resourcing Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Ledston
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with Databricks. A friendly chat can lead to insider info about job openings that might not even be advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines and models. Use GitHub or a personal website to display your projects, especially any work with Databricks, Spark, or SQL. This gives potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions. Be ready to discuss your experience with cloud migrations and data modelling techniques. Practising your answers will help you feel more confident when it's time to shine.
✨Tip Number 4
Don't forget to apply through our website! We've got loads of opportunities waiting for talented Data Engineers like you. Plus, applying directly can sometimes give you an edge over other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Databricks, data pipelines, and any relevant projects you've worked on. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how you can contribute to our team. Keep it concise but engaging; we love a good story!
Showcase Your Technical Skills: Don't forget to mention your technical skills in your application. If you've worked with Apache Spark, SQL, or Delta Lake, let us know! We're keen to see how you can bring your expertise to our data platform modernisation.
Apply Through Our Website: We encourage you to apply through our website for a smoother process. It helps us keep track of applications and ensures you don't miss out on any important updates. Plus, it's super easy!
How to prepare for a job interview at SPG Resourcing
✨Know Your Tech Stack
Make sure you're well-versed in Databricks, Apache Spark, and SQL. Brush up on your knowledge of Delta Lake and data modelling techniques like star schema or medallion architecture. Being able to discuss these technologies confidently will show that you're ready to hit the ground running.
✨Showcase Your Problem-Solving Skills
Prepare examples of how you've tackled data quality and performance issues in past projects. Be ready to explain your thought process and the steps you took to resolve these challenges. This will demonstrate your hands-on experience and ability to think critically under pressure.
✨Understand the Business Context
Familiarise yourself with how data engineering supports business goals. Be prepared to discuss how you can translate business requirements into scalable data solutions. This shows that you're not just a techie but also understand the bigger picture and can collaborate effectively with stakeholders.
✨Get Comfortable with Agile Methodologies
Since the role involves sprint planning and issue tracking using Jira, it's a good idea to brush up on Agile principles. Be ready to discuss your experience working in Agile environments and how you've contributed to team dynamics and project delivery.