At a Glance
- Tasks: Design and maintain data pipelines using Databricks and Apache Spark.
- Company: Join a forward-thinking company with a focus on data innovation.
- Benefits: Earn £400-£500 per day with fully remote work and flexible hours.
- Why this job: Make an impact in the data world while working from anywhere.
- Qualifications: 3+ years as a Data Engineer with strong Databricks and Python skills.
- Other info: Exciting opportunity for career growth in a dynamic environment.
The predicted salary is between £80,000 and £120,000 per year.
We are currently recruiting a Data Engineer for one of our clients. The role is outside IR35, pays £400-£500 per day, and will initially run for six months. It is also fully remote.
Key Responsibilities
- Design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark)
- Build and optimize ETL/ELT workflows for large-scale structured and unstructured data
- Implement Delta Lake architectures (Bronze/Silver/Gold layers; a sketch follows this list)
- Integrate data from multiple sources (databases, APIs, event streams, files)
- Optimize Spark jobs for performance, scalability, and cost
- Manage data quality, validation, and monitoring
- Collaborate with analytics and ML teams to support reporting and model development
- Implement CI/CD, version control, and automated testing for data pipelines
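To give a flavour of what this looks like in practice, here is a minimal PySpark sketch of the Bronze/Silver/Gold pattern referenced above. It assumes a Databricks runtime with Delta Lake available; all paths and column names are illustrative, not taken from the client's environment.

```python
# Minimal medallion (Bronze/Silver/Gold) sketch in PySpark with Delta Lake.
# Paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Bronze: land raw events as-is, preserving the source payload
raw = spark.read.json("/mnt/landing/events/")
raw.write.format("delta").mode("append").save("/mnt/bronze/events")

# Silver: validate, deduplicate, and type the data
bronze = spark.read.format("delta").load("/mnt/bronze/events")
silver = (
    bronze
    .filter(F.col("event_id").isNotNull())   # basic data-quality gate
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

# Gold: aggregate for reporting and ML consumption
gold = silver.groupBy(F.to_date("event_ts").alias("event_date")).count()
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_event_counts")
```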
Required Qualifications
- 3+ years of experience as a Data Engineer
- Strong experience with Databricks and Apache Spark
- Proficiency in Python (required) and advanced SQL
- Hands-on experience with AWS or Azure cloud services:
  - AWS: S3, EMR, Glue, Redshift, Lambda, IAM
  - Azure: ADLS Gen2, Azure Databricks, Synapse, Data Factory, Key Vault
Data Engineer in Milton Keynes
Employer: Searches @ Wenham Carter
Contact: Searches @ Wenham Carter Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Milton Keynes
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field and let them know you're on the lookout for opportunities. You never know who might have a lead or can refer you to a hiring manager.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Databricks and Apache Spark. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your thought process when designing data pipelines or optimising Spark jobs – it’s all about demonstrating your expertise!
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities that match your skills, and applying directly can sometimes give you an edge. Plus, we’re here to support you every step of the way!
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Databricks, Apache Spark, and any relevant cloud services like AWS or Azure. We want to see how your skills match what we're looking for!
Showcase Your Projects: Include specific projects where you've designed and maintained data pipelines or worked with ETL/ELT workflows. This gives us a clear picture of your hands-on experience and problem-solving skills in action.
Be Clear and Concise: When writing your application, keep it clear and to the point. Use bullet points for key achievements and responsibilities. We appreciate straightforward communication that gets right to the heart of your qualifications.
Apply Through Our Website: Don't forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, we love seeing applications come directly from our site!
How to prepare for a job interview at Searches @ Wenham Carter
✨Know Your Tech Stack
Make sure you’re well-versed in Databricks, Apache Spark, and the cloud services mentioned in the job description. Brush up on your Python and SQL skills, as these will likely come up during technical discussions.
✨Showcase Your Projects
Prepare to discuss specific projects where you've designed and maintained data pipelines. Highlight your experience with ETL/ELT workflows and how you’ve optimised Spark jobs for performance and cost.
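If you want a concrete optimisation to talk through, broadcasting a small dimension table to avoid shuffling a large fact table is a classic example. A minimal sketch, with illustrative paths and column names:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

facts = spark.read.format("delta").load("/mnt/silver/events")     # large table
dims = spark.read.format("delta").load("/mnt/silver/customers")   # small table

# Broadcasting the small side avoids shuffling the large fact table,
# which typically cuts both runtime and cluster cost.
joined = facts.join(F.broadcast(dims), on="customer_id", how="left")
```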
✨Understand Delta Lake Architecture
Familiarise yourself with the Bronze/Silver/Gold layers of Delta Lake. Be ready to explain how you’ve implemented these architectures in past roles and the benefits they bring to data management.
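One concrete benefit worth being able to explain is Delta Lake's ACID MERGE, which makes re-runnable upserts into the Silver layer straightforward. A minimal sketch, assuming the delta-spark package (built into Databricks); paths and keys are illustrative:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
updates = spark.read.format("delta").load("/mnt/bronze/events")

# Upsert new or changed rows into Silver in one ACID transaction,
# so re-running the job never creates duplicates.
silver = DeltaTable.forPath(spark, "/mnt/silver/events")
(
    silver.alias("s")
    .merge(updates.alias("u"), "s.event_id = u.event_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```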
✨Collaboration is Key
Since the role involves working with analytics and ML teams, think of examples where you’ve collaborated effectively. Be prepared to discuss how you’ve supported reporting and model development in previous positions.