At a Glance
- Tasks: Design and deliver enterprise-scale data pipelines using AWS Glue and PySpark.
- Company: A supportive university transforming its data platform with AWS.
- Benefits: Competitive daily rate, remote work, and a chance to work on impactful projects.
- Why this job: Join a cutting-edge project and enhance your skills in cloud data engineering.
- Qualifications: Experience with Spark, PySpark, and AWS services is essential.
- Other info: Remote role with excellent opportunities for professional growth.
The predicted salary is between £36,000 and £60,000 per year.
Overview: I am supporting a university with a major data platform transformation project as they implement AWS across their environment. We are looking for a Data Engineer with strong hands-on experience in designing and delivering enterprise-scale data pipelines using AWS Glue and PySpark. The role will involve building and optimising ETL processes, working with raw and curated datasets, and ensuring data is processed efficiently and to a high standard.
Responsibilities: You will be responsible for developing scalable, production-grade data workflows, integrating data from multiple systems, and applying best practices around data modelling, data quality, and automation. Experience working within a modern cloud data stack is essential, along with an understanding of how to structure data for analytics, reporting and downstream consumption.
Desired Skills and Experience: The ideal candidate will have a solid background in Spark-based engineering, particularly PySpark, and be confident working with Glue jobs, the Glue Catalog, S3, and other AWS-native services used within a data platform build. They will also have a proven ability to build and optimise ETL processes, integrate multiple data sources, and uphold data quality and modelling best practices in a cloud environment.
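To give a flavour of the day-to-day work described above, here is a minimal, purely illustrative sketch of a Glue PySpark job: it reads a raw table registered in the Glue Catalog, applies a couple of simple transformations, and writes curated Parquet back to S3. The database, table, column, and bucket names (raw_db, events, event_id, event_date, s3://example-curated-bucket/...) are placeholders, not details from this role.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name and initialise contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read a raw dataset registered in the Glue Catalog
# (database and table names are placeholders)
raw = glueContext.create_dynamic_frame.from_catalog(
    database="raw_db",
    table_name="events",
)

# Convert to a Spark DataFrame for transformation:
# deduplicate and drop rows missing the (hypothetical) event_id key
df = raw.toDF().dropDuplicates()
df = df.filter(df["event_id"].isNotNull())

# Write the curated output back to S3 as Parquet, partitioned by date
(df.write
   .mode("overwrite")
   .partitionBy("event_date")
   .parquet("s3://example-curated-bucket/events/"))

job.commit()
```

A production pipeline of the kind this role describes would typically layer data-quality checks, schema enforcement, and job bookmarking on top of a skeleton like this.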
Location and Terms:
- Location: Remote (client based in North East England)
- Rate: £500-£600 per day
- IR35: Inside IR35; must use an approved umbrella on our list
- Duration: approx. 3 months
- Start date: ASAP
AWS Data Engineer in Lincoln (employer: Real)
Contact Details:
Real Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AWS Data Engineer role in Lincoln
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who have experience with AWS. A friendly chat can lead to insider info about job openings or even referrals.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving AWS Glue and PySpark. This will give potential employers a taste of what you can do and set you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on your technical knowledge. Be ready to discuss your experience with ETL processes and data modelling. Practising common interview questions can help you feel more confident when it’s your turn to shine.
✨Tip Number 4
Don’t forget to apply through our website! We’ve got loads of opportunities that might be perfect for you. Plus, applying directly can sometimes give you an edge over other candidates.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with AWS Glue, PySpark, and ETL processes. We want to see how your skills match the job description, so don’t be shy about showcasing relevant projects!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re the perfect fit for this Data Engineer role. Share specific examples of your work with data pipelines and cloud environments to grab our attention.
Showcase Your Technical Skills: Don’t forget to mention your hands-on experience with AWS services like Glue, S3, and Spark. We love seeing candidates who can demonstrate their technical prowess, so include any relevant certifications or projects you've completed.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it makes the process smoother for everyone involved!
How to prepare for a job interview at Real
✨Know Your AWS Inside Out
Make sure you brush up on your knowledge of AWS services, especially Glue and S3. Be ready to discuss how you've used these tools in past projects, as well as any challenges you faced and how you overcame them.
✨Showcase Your ETL Expertise
Prepare to talk about your experience with building and optimising ETL processes. Have specific examples ready that demonstrate your ability to integrate multiple data sources and maintain data quality. This will show that you can handle the responsibilities of the role.
✨Demonstrate Your Data Modelling Skills
Be ready to discuss best practices around data modelling and how you've applied them in previous roles. Highlight any experience you have structuring data for analytics and reporting, as this is crucial for the position.
✨Ask Insightful Questions
Prepare thoughtful questions about the company's data platform transformation project. This shows your genuine interest in the role and helps you understand how you can contribute effectively. Plus, it gives you a chance to assess if the company is the right fit for you.