At a Glance
- Tasks: Design and maintain data pipelines using Databricks and Apache Spark.
- Company: Join a forward-thinking company offering fully remote work.
- Benefits: Earn £400-500 per day with flexible hours and a 6-month contract.
- Why this job: Make an impact by optimising data workflows and collaborating with analytics teams.
- Qualifications: 3+ years as a Data Engineer with strong Databricks and Python skills.
- Other info: Great opportunity for career growth in a dynamic, remote environment.
We are currently recruiting a Data Engineer for one of our clients. The role is outside IR35, pays £400-500 per day, and will initially run for 6 months. It is also fully remote.
Key Responsibilities
- Design, develop, and maintain batch and streaming data pipelines using Databricks (Apache Spark)
- Build and optimise ETL/ELT workflows for large-scale structured and unstructured data
- Implement Delta Lake architectures (Bronze/Silver/Gold layers; see the sketch after this list)
- Integrate data from multiple sources (databases, APIs, event streams, files)
- Optimise Spark jobs for performance, scalability, and cost
- Manage data quality, validation, and monitoring
- Collaborate with analytics and ML teams to support reporting and model development
- Implement CI/CD, version control, and automated testing for data pipelines (a test sketch also follows this list)
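For readers less familiar with the Bronze/Silver/Gold pattern named above (often called the medallion architecture), here is a minimal PySpark sketch of the idea. It is illustrative only: the source path, column names, and table names are hypothetical, and it assumes a Databricks-style environment where Delta Lake is preconfigured.

```python
# Minimal Bronze -> Silver -> Gold sketch on Delta Lake (PySpark).
# Hypothetical paths/tables; assumes Delta Lake is available (as on Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Bronze: land raw JSON as-is, adding an ingest timestamp for auditing.
bronze = (
    spark.read.json("/mnt/raw/events/")  # hypothetical source path
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze_events")

# Silver: clean and conform - dedupe, enforce types, drop bad rows.
silver = (
    spark.table("bronze_events")
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver_events")

# Gold: aggregate into a reporting-friendly shape for analytics teams.
gold = (
    spark.table("silver_events")
    .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
    .agg(F.count("*").alias("event_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_events")
```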
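On the CI/CD and automated-testing bullet, one widely used approach is to factor transformations into pure functions and unit-test them with pytest, so a CI runner can exercise the pipeline logic on every commit. The `clean_events` helper below is hypothetical, purely to show the shape of such a test:

```python
# Sketch of a unit test for a pipeline transformation (pytest + PySpark).
# `clean_events` is a hypothetical helper mirroring the Silver step above.
import pytest
from pyspark.sql import SparkSession, functions as F

def clean_events(df):
    """Dedupe on event_id and drop rows with a null id."""
    return df.dropDuplicates(["event_id"]).filter(F.col("event_id").isNotNull())

@pytest.fixture(scope="module")
def spark():
    # A small local session is enough for transformation tests in CI.
    return SparkSession.builder.master("local[1]").appName("pipeline-tests").getOrCreate()

def test_clean_events_removes_dupes_and_nulls(spark):
    df = spark.createDataFrame(
        [("a", 1), ("a", 1), (None, 2)], ["event_id", "value"]
    )
    result = clean_events(df)
    assert result.count() == 1
    assert result.first().event_id == "a"
```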
Required Qualifications
- 3+ years of experience as a Data Engineer
- Strong experience with Databricks and Apache Spark
- Proficiency in Python (required) and advanced SQL
- Hands-on experience with AWS or Azure cloud services (illustrated briefly after this list):
  - AWS: S3, EMR, Glue, Redshift, Lambda, IAM
  - Azure: ADLS Gen2, Azure Databricks, Synapse, Data Factory, Key Vault
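As a rough illustration of the cloud-storage side, the same Spark read targets S3 or ADLS Gen2 simply by switching the URI scheme. Authentication (e.g. IAM roles on AWS, or service principals and Key Vault-backed secret scopes on Azure) is assumed to be configured separately, and the bucket/account names below are made up:

```python
# Same Parquet read against AWS S3 vs Azure ADLS Gen2 (PySpark).
# Hypothetical bucket/container/account names; credentials configured out-of-band.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

s3_df = spark.read.parquet("s3a://example-bucket/raw/events/")
adls_df = spark.read.parquet("abfss://raw@exampleacct.dfs.core.windows.net/events/")
```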
Employer: Searches @ Wenham Carter
Contact: Searches @ Wenham Carter Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Bradford
✨Tip Number 1
Network like a pro! Reach out to fellow Data Engineers or join relevant online communities. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data pipelines, ETL workflows, and any projects using Databricks or Apache Spark. This will give potential employers a taste of what you can do.
✨Tip Number 3
Prepare for interviews by brushing up on common Data Engineer questions. Be ready to discuss your experience with AWS or Azure, and how you've tackled challenges in data quality and performance optimisation.
✨Tip Number 4
Don't forget to apply through our website! We've got loads of opportunities that might just be the perfect fit for you. Plus, it's a great way to get noticed by hiring managers.
We think you need these skills to ace the Data Engineer role in Bradford
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Databricks, Apache Spark, and any relevant cloud services like AWS or Azure. We want to see how your skills match what we're looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're passionate about data engineering and how your background makes you a great fit for the role. Keep it concise but engaging – we love a good story!
Showcase Your Projects: If you've worked on any cool projects involving data pipelines or ETL workflows, make sure to mention them! We want to see your hands-on experience and how you've tackled challenges in the past.
Apply Through Our Website: Don't forget to apply through our website! It's the easiest way for us to keep track of your application and ensures you don't miss out on any updates. We can't wait to hear from you!
How to prepare for a job interview at Searches @ Wenham Carter
✨Know Your Tech Inside Out
Make sure you brush up on your Databricks and Apache Spark knowledge. Be ready to discuss how you've designed and maintained data pipelines in the past, and have examples of ETL/ELT workflows you've built. The more specific you can be about your experience with these technologies, the better!
✨Showcase Your Problem-Solving Skills
Prepare to talk about challenges you've faced in previous roles, especially around data quality and performance optimisation. Think of a couple of scenarios where you had to troubleshoot or improve a data pipeline, and explain your thought process and the outcome.
✨Familiarise Yourself with Cloud Services
Since the role requires experience with AWS or Azure, make sure you know the ins and outs of the services mentioned in the job description. Be ready to discuss how you've used S3, EMR, Glue, or their Azure counterparts in your projects. This will show that you're not just familiar with the tools, but that you can leverage them effectively.
✨Collaboration is Key
This role involves working with analytics and ML teams, so be prepared to discuss how you've collaborated with others in the past. Share examples of how you've supported reporting and model development, and highlight your communication skills. It's all about showing that you can work well in a team environment!