At a Glance
- Tasks: Design and deliver enterprise-scale data pipelines using AWS Glue and PySpark.
- Company: A supportive university undertaking a major data platform transformation (recruited via Real Staffing).
- Benefits: Competitive daily rate, remote work, and hands-on experience with cutting-edge technology.
- Why this job: Join a transformative project and enhance your skills in cloud data engineering.
- Qualifications: Experience with AWS services, particularly Glue, S3, and PySpark.
- Other info: Remote role with a duration of approximately 3 months.
I am supporting a university with a major data platform transformation project as they implement AWS across their environment. We are looking for a Data Engineer with strong hands-on experience in designing and delivering enterprise-scale data pipelines using AWS Glue and PySpark. The role will involve building and optimising ETL processes, working with raw and curated datasets, and ensuring data is processed efficiently and to a high standard.
You will be responsible for developing scalable, production-grade data workflows, integrating data from multiple systems, and applying best practices around data modelling, data quality, and automation. Experience working within a modern cloud data stack is essential, along with an understanding of how to structure data for analytics, reporting and downstream consumption.
The ideal candidate will have a solid background in Spark-based engineering, particularly PySpark, and be confident working with Glue jobs, Glue Catalog, S3, and other AWS native services used within a data platform build.
Location: Remote (client based in North East England)
Rate: £500–£600 per day
IR35: Inside IR35; candidates must use an approved umbrella company from our list
Duration: Approximately 3 months
Start date: ASAP
Employer: Real Staffing
Contact detail: Real Staffing Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AWS Data Engineer role in the North East
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who have experience with AWS. A friendly chat can lead to insider info about job openings or even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving AWS Glue and PySpark. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and AWS services. Practise explaining your past projects and how you tackled challenges, focusing on your hands-on experience with ETL processes.
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities that match your skills, and applying directly can sometimes give you an edge. Plus, we’re here to support you every step of the way!
We think you need these skills to ace AWS Data Engineer in North East
- Hands-on experience with AWS Glue (jobs and Glue Catalog), S3, and other AWS-native data services
- Strong Spark-based engineering skills, particularly PySpark
- Building and optimising production-grade ETL pipelines
- Best practices around data modelling, data quality, and automation
- Structuring data for analytics, reporting, and downstream consumption
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your hands-on experience with AWS Glue and PySpark. We want to see how you've designed and delivered data pipelines, so don’t hold back on those details!
Showcase Your Projects: Include specific examples of ETL processes you've built or optimised. We love seeing real-world applications of your skills, especially if they involve integrating data from multiple systems.
Highlight Your Cloud Experience: Since this role is all about working within a modern cloud data stack, be sure to mention any relevant experience you have with AWS services like S3 and Glue. We’re looking for that cloud-savvy mindset!
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to keep track of your application and ensure it gets the attention it deserves!
How to prepare for a job interview at Real Staffing
✨Know Your AWS Inside Out
Make sure you brush up on your knowledge of AWS services, especially Glue, S3, and how they integrate with data pipelines. Be ready to discuss your hands-on experience with these tools and how you've used them in past projects.
✨Showcase Your PySpark Skills
Prepare to talk about your experience with PySpark in detail. Have examples ready that demonstrate how you've built and optimised ETL processes using PySpark, and be prepared to explain the challenges you faced and how you overcame them.
✨Understand Data Quality and Modelling
Familiarise yourself with best practices around data quality and modelling. Be ready to discuss how you ensure data integrity and quality in your workflows, and how you structure data for analytics and reporting.
✨Ask Insightful Questions
Prepare some thoughtful questions about the company's data transformation project. This shows your interest and helps you understand their needs better. Ask about their current challenges with data integration or how they measure success in their data initiatives.