At a Glance
- Tasks: Build and optimise data pipelines for global brands using cutting-edge tech.
- Company: A growing data insights business based in Manchester with a privacy-first approach.
- Benefits: Hybrid working, competitive salary, and additional compensation for on-call participation.
- Other info: Collaborative environment with opportunities to work on large datasets.
- Why this job: Join a dynamic team and make a real impact by delivering data-driven solutions for global brands.
- Qualifications: Strong experience in Python, SQL, and AWS; hands-on with Apache Spark.
The predicted salary is between £45,000 and £55,000 per year.
Our client is a growing data and insights business. They are hiring a Data Engineer to join their Manchester-based team. The company works with global brands, providing data-driven insights through a large-scale, privacy-first platform. As they continue to scale, they’re looking for a Data Engineer to play a key role in building and optimising their data pipelines and reporting capabilities.
The Role
You’ll be working on a large and complex data platform, developing and maintaining both real-time and batch ETL pipelines. This is a hands-on role focused on improving data quality, scalability, and performance. You’ll collaborate closely with engineering and product teams, helping translate business requirements into robust, production-ready data solutions.
Key Responsibilities
- Build and maintain real-time and batch ETL pipelines
- Monitor and improve data pipeline performance, architecture, and tooling
- Ensure data accuracy, integrity, and scalability
- Act as a subject matter expert for data pipelines and processes
- Work closely with stakeholders to define and deliver data solutions
- Review code and provide feedback to raise quality across the team
- Support production systems and troubleshoot issues
- Write clean, tested, and maintainable code
Tech Stack
- Python & SQL
- AWS (EMR, Athena, Lambda)
- Apache Spark (Scala or PySpark)
- Big data / distributed processing tools
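For a sense of the day-to-day work on this stack, the sketch below shows a minimal PySpark batch ETL job that reads raw events from S3, applies basic data-quality steps, and writes partitioned Parquet that Athena can query. It is an illustrative sketch only; the bucket names, paths, and event schema are assumptions, not details of the client's platform.

```python
# Minimal sketch of a batch ETL job on the stack above.
# Assumptions: hypothetical S3 buckets, paths, and event columns.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Read raw JSON events landed in S3 (e.g. by a streaming ingest job).
raw = spark.read.json("s3://example-raw-events/date=2024-01-01/")

# Basic data-quality steps: drop malformed rows, deduplicate on event id,
# and derive a date column for partitioning.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .dropDuplicates(["event_id"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write partitioned Parquet so downstream Athena queries stay cheap and fast.
(
    clean.write.mode("overwrite")
         .partitionBy("event_date")
         .parquet("s3://example-curated-events/")
)

spark.stop()
```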
What They’re Looking For
- Strong experience as a Data Engineer using Python and SQL
- Experience working with AWS or similar cloud platforms
- Hands-on experience with Apache Spark
- Ability to work with large, complex datasets
- Strong problem-solving and analytical mindset
- Understanding of CI/CD, testing, and agile development practices
- Confident communicating with technical and non-technical stakeholders
- Exposure to AI tools for improving productivity is a bonus
Additional Info
- Location: Manchester (Trafford Park)
- Hybrid working (2 days in the office)
- Participation in an on-call rota (with additional compensation)
Employer: SearchWorks (Data Engineer in Manchester)
Contact: SearchWorks Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer role in Manchester
✨Tip Number 1
Network like a pro! Reach out to folks in the industry on LinkedIn or at local meetups. You never know who might have the inside scoop on job openings or can put in a good word for you.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data projects, especially those involving Python, SQL, and AWS. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge and problem-solving skills. Practice common data engineering scenarios and be ready to discuss how you've tackled challenges in past projects.
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities that might just be the perfect fit for you. Plus, it’s a great way to get noticed by our hiring team.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with Python, SQL, and AWS, and don’t forget to mention any hands-on work with Apache Spark. We want to see how your skills match what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re excited about this role and how your background makes you a perfect fit. Be sure to mention your problem-solving skills and experience with data pipelines.
Showcase Your Projects: If you’ve worked on any relevant projects, make sure to include them in your application. Whether it’s building ETL pipelines or optimising data performance, we love seeing real examples of your work!
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It’s super easy, and you’ll be able to keep track of your application status. Let’s get your journey started!
How to prepare for a job interview at SearchWorks
✨Know Your Tech Stack
Make sure you’re well-versed in Python, SQL, and AWS. Brush up on your knowledge of Apache Spark too, as it’s a key part of the role. Be ready to discuss how you've used these technologies in past projects.
✨Showcase Your Problem-Solving Skills
Prepare examples that highlight your analytical mindset and problem-solving abilities. Think of specific challenges you faced in previous roles and how you overcame them, especially related to data pipelines and ETL processes.
✨Understand the Business Context
Familiarise yourself with the company’s mission and the types of global brands they work with. Being able to connect your technical skills to their business needs will show that you can translate requirements into effective data solutions.
✨Communicate Effectively
Practice explaining complex technical concepts in simple terms. You’ll need to communicate with both technical and non-technical stakeholders, so being clear and concise is crucial. Prepare to demonstrate this during the interview.