At a Glance
- Tasks: Design and maintain data pipelines using Databricks and Apache Spark.
- Company: Join a forward-thinking company that values innovation and remote work.
- Benefits: Earn £400-500 per day, fully remote, with flexible hours.
- Why this job: Make an impact by optimising data workflows and collaborating with analytics teams.
- Qualifications: 3+ years as a Data Engineer with strong Databricks and Python skills.
- Other info: Exciting opportunity for career growth in a dynamic, tech-driven environment.
We are currently recruiting a Data Engineer for one of our clients. The role is outside IR35, pays £400-500 per day, is fully remote, and will initially run for 6 months.
Key Responsibilities
- Design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark)
- Build and optimise ETL/ELT workflows for large-scale structured and unstructured data
- Implement Delta Lake architectures (Bronze/Silver/Gold layers; see the sketch after this list)
- Integrate data from multiple sources (databases, APIs, event streams, files)
- Optimise Spark jobs for performance, scalability, and cost
- Manage data quality, validation, and monitoring
- Collaborate with analytics and ML teams to support reporting and model development
- Implement CI/CD, version control, and automated testing for data pipelines
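To give a flavour of the hands-on work, here is a minimal PySpark sketch of the Bronze-to-Silver step in a medallion (Bronze/Silver/Gold) Delta Lake architecture. Everything in it is illustrative: the paths, column names, and schema are hypothetical, and writing the Delta format assumes a Databricks runtime (or the open-source delta-spark package).

```python
# Illustrative Bronze -> Silver step in a medallion Delta Lake layout.
# Paths, columns, and schema are hypothetical; the "delta" format
# assumes Databricks or the delta-spark package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver-sketch").getOrCreate()

# Bronze: raw events landed as-is from source systems.
bronze = spark.read.json("/mnt/lake/bronze/events/")

# Silver: validated, deduplicated, typed records.
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())        # basic data-quality gate
    .dropDuplicates(["event_id"])                 # drop replayed events
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .withColumn("event_date", F.to_date("event_ts"))
)

# Partitioning by date keeps downstream reads cheap -- one simple angle
# on the performance/cost optimisation mentioned in the list above.
(silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save("/mnt/lake/silver/events/"))
```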
Required Qualifications
- 3+ years of experience as a Data Engineer
- Strong experience with Databricks and Apache Spark
- Proficiency in Python (required) and advanced SQL
- Hands-on experience with AWS or Azure cloud services:
  - AWS: S3, EMR, Glue, Redshift, Lambda, IAM
  - Azure: ADLS Gen2, Azure Databricks, Synapse, Data Factory, Key Vault
Employer: Searches @ Wenham Carter
Contact: Searches @ Wenham Carter Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Watford
✨Tip Number 1
Network like a pro! Reach out to fellow Data Engineers or join relevant online communities. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, ETL workflows, and any projects using Databricks or Apache Spark. This will give potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge. Be ready to discuss your experience with AWS or Azure services and how you've optimised Spark jobs in the past. Practice makes perfect!
✨Tip Number 4
Don't forget to apply through our website! We’ve got loads of opportunities that match your skills, and applying directly can sometimes give you an edge over other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Databricks, Apache Spark, and any relevant cloud services like AWS or Azure. We want to see how your skills match what we're looking for!
Showcase Your Projects: Include specific projects where you've designed and maintained data pipelines or worked with ETL/ELT workflows. This gives us a clear picture of your hands-on experience and problem-solving skills in action.
Be Clear and Concise: When writing your application, keep it clear and to the point. Use bullet points for easy reading and make sure to highlight your key achievements. We appreciate straightforward communication!
Apply Through Our Website: Don't forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. We can’t wait to see what you bring to the table!
How to prepare for a job interview at Searches @ Wenham Carter
✨Know Your Tech Inside Out
Make sure you brush up on your Databricks and Apache Spark knowledge. Be ready to discuss how you've designed and maintained data pipelines in the past, and have examples of ETL/ELT workflows you've built. The more specific you can be about your experience with these technologies, the better!
✨Showcase Your Problem-Solving Skills
Prepare to talk about challenges you've faced in previous roles, especially around data quality and performance optimisation. Think of a couple of scenarios where you had to troubleshoot or improve a data pipeline, and explain your thought process and the outcomes.
✨Familiarise Yourself with Cloud Services
Since the role requires experience with AWS or Azure, make sure you can discuss your hands-on experience with these platforms. Be ready to explain how you've used services like S3, EMR, or Azure Databricks in your projects, and how they contributed to the success of your data engineering tasks.
✨Collaboration is Key
This role involves working closely with analytics and ML teams, so be prepared to discuss how you've collaborated with others in the past. Share examples of how you’ve supported reporting and model development, and highlight your communication skills and teamwork.