At a Glance
- Tasks: Design and maintain data pipelines using Databricks and Apache Spark.
- Company: Join a forward-thinking company with a focus on data innovation.
- Benefits: Earn £400-500 per day with fully remote work and flexible hours.
- Why this job: Make an impact by optimising data workflows and collaborating with analytics teams.
- Qualifications: 3+ years as a Data Engineer with strong Databricks and Python skills.
- Other info: Exciting opportunity for career growth in a dynamic, remote environment.
We are currently recruiting a Data Engineer for one of our clients. The role is outside IR35, pays £400-500 per day, and will initially run for 6 months. It is also fully remote.
Key Responsibilities
- Design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark)
- Build and optimise ETL/ELT workflows for large-scale structured and unstructured data
- Implement Delta Lake architectures (Bronze/Silver/Gold layers; see the sketch after this list)
- Integrate data from multiple sources (databases, APIs, event streams, files)
- Optimise Spark jobs for performance, scalability, and cost
- Manage data quality, validation, and monitoring
- Collaborate with analytics and ML teams to support reporting and model development
- Implement CI/CD, version control, and automated testing for data pipelines
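As a rough illustration of what these responsibilities look like in practice, here is a minimal PySpark sketch of a Bronze/Silver/Gold Delta Lake flow. All paths and column names (`event_id`, `event_ts`, `event_type`) are hypothetical, and a production Databricks pipeline would typically layer streaming ingestion, data quality checks, and CI/CD around this core:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land raw JSON as-is so the source can always be replayed.
raw = spark.read.json("/mnt/raw/events/")  # hypothetical source path
raw.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: type, validate, and deduplicate the Bronze data.
silver = (
    spark.read.format("delta").load("/mnt/bronze/events")
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
    .dropDuplicates(["event_id"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

# Gold: aggregate into a reporting-friendly table for analytics teams.
gold = silver.groupBy("event_type").agg(F.count("*").alias("events"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/event_counts")
```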
Required Qualifications
- 3+ years of experience as a Data Engineer
- Strong experience with Databricks and Apache Spark
- Proficiency in Python (required) and advanced SQL
- Hands-on experience with AWS or Azure cloud services (see the sketch after this list):
  - AWS: S3, EMR, Glue, Redshift, Lambda, IAM
  - Azure: ADLS Gen2, Azure Databricks, Synapse, Data Factory, Key Vault
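For a sense of what hands-on cloud experience means here, a minimal sketch of reading the same dataset from S3 and from ADLS Gen2 in Spark; the bucket, storage account, and paths are hypothetical, and credentials are assumed to come from an IAM instance profile or an Azure service principal rather than being hard-coded:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# AWS: read Parquet straight from S3 (auth via an IAM role / instance profile).
aws_orders = spark.read.parquet("s3a://example-bucket/landing/orders/")

# Azure: read the equivalent data from ADLS Gen2 (auth via a service principal).
azure_orders = spark.read.parquet(
    "abfss://landing@exampleaccount.dfs.core.windows.net/orders/"
)
```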
Employer: Searches @ Wenham Carter
Contact detail: Searches @ Wenham Carter Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Woking
✨ Tip Number 1
Network like a pro! Reach out to fellow Data Engineers or join relevant online communities. You never know who might have the inside scoop on job openings or can refer you directly.
✨ Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, ETL workflows, and any projects using Databricks or Apache Spark. This will give potential employers a taste of what you can do.
✨ Tip Number 3
Prepare for those interviews! Brush up on your technical knowledge, especially around AWS or Azure services. Be ready to discuss how you've tackled data quality and performance issues in past projects.
✨ Tip Number 4
Apply through our website! We've got loads of opportunities that match your skills. Plus, it's a great way to get noticed by recruiters who are looking for top talent like you.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Databricks, Apache Spark, and any relevant cloud services like AWS or Azure. We want to see how your skills match what we're looking for!
Showcase Your Projects: Include specific projects where you've designed and maintained data pipelines or worked with ETL/ELT workflows. We love seeing real examples of your work, so don't hold back on the details!
Be Clear and Concise: When writing your application, keep it clear and to the point. Use bullet points for your skills and experiences to make it easy for us to read. We appreciate a well-structured application!
Apply Through Our Website: Don't forget to apply through our website! It's the best way for us to receive your application and ensures you're considered for the role. We can't wait to hear from you!
How to prepare for a job interview at Searches @ Wenham Carter
✨ Know Your Tech Inside Out
Make sure you brush up on your Databricks and Apache Spark knowledge. Be ready to discuss how you've designed and optimised data pipelines in the past, and have specific examples at hand. This will show that you're not just familiar with the tools, but that you can use them effectively.
✨ Showcase Your Problem-Solving Skills
Prepare to talk about challenges you've faced in previous roles, especially around data quality and performance optimisation. Think of a couple of scenarios where you had to troubleshoot or improve a process, and be ready to explain your thought process and the outcome.
✨ Familiarise Yourself with CI/CD Practices
Since the role involves implementing CI/CD for data pipelines, it's crucial to understand these concepts well. Be prepared to discuss your experience with version control and automated testing, and how these practices can enhance data engineering workflows.
✨ Collaborate and Communicate
This role requires collaboration with analytics and ML teams, so be ready to highlight your teamwork skills. Share examples of how you've worked with cross-functional teams in the past, and how you ensured effective communication to achieve common goals.