At a Glance
- Tasks: Design and build data pipelines on AWS using Glue and Kafka.
- Company: Join a forward-thinking tech company focused on innovation.
- Benefits: Enjoy competitive salary, career growth, and a collaborative environment.
- Why this job: Work with cutting-edge tech and make a real impact in data engineering.
- Qualifications: Around 10 years' experience with AWS, Kafka, and Python/PySpark.
- Other info: Dynamic team culture with opportunities for professional development.
The predicted salary is between £36,000 and £60,000 per year.
About the role: We’re seeking a hands-on Senior Data Engineer (~10 years’ experience) to deliver production data pipelines on AWS. You’ll design and build streaming (Kafka) and batch pipelines using Glue/EMR (PySpark), implement data contracts and quality gates, and set up CI/CD and observability. You’ve shipped real systems, coached teams, and you document as you go.
Requirements
What you'll do:
- Architect and deliver lake/lakehouse data flows on AWS (S3, Glue Data Catalog, Glue ETL/EMR).
- Build Kafka consumers/producers, manage schema evolution, resilience, and DLQs.
- Implement PySpark transformations, CDC merges, partitioning, and optimization.
- Add quality/observability (tests, monitoring, alerting, lineage basics).
- Harden security (IAM least privilege, KMS, private networking).
- Create runbooks, diagrams, and handover materials.
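To give a flavour of the CDC-merge work described above, here is a minimal sketch of upsert/delete merge semantics in plain Python. In production this would be a Glue/PySpark or Delta Lake MERGE; the record shapes and field names here are illustrative only, not taken from the posting:

```python
# Hypothetical sketch: apply CDC change events (insert/update/delete) to a
# keyed target table. Real pipelines would do this with a PySpark/Delta MERGE.

def cdc_merge(target, changes):
    """Return a new key -> row mapping with CDC events applied, last write wins."""
    merged = dict(target)
    for event in changes:
        key = event["id"]
        if event["op"] == "delete":
            merged.pop(key, None)  # tolerate deletes for already-absent keys
        else:  # "insert" or "update" are both upserts here
            merged[key] = {"id": key, **event["data"]}
    return merged

target = {1: {"id": 1, "name": "alice"}, 2: {"id": 2, "name": "bob"}}
changes = [
    {"op": "update", "id": 1, "data": {"name": "alicia"}},
    {"op": "delete", "id": 2},
    {"op": "insert", "id": 3, "data": {"name": "carol"}},
]
result = cdc_merge(target, changes)
# result: rows 1 (updated) and 3 (inserted); row 2 deleted
```

The same last-write-wins logic underpins SCD-style merges; the PySpark version adds partitioning and schema handling on top.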
What you’ll bring:
- Deep AWS (Glue, RDS, S3, EMR, IAM/KMS, CloudWatch).
- Strong Kafka (MSK/Confluent, schema registry, consumer group tuning).
- Python/PySpark in production with tests and CI/CD.
- Data modeling (bronze/silver/gold, CDC, SCD2) and data contracts.
- IaC (Terraform/CDK) and cost/performance tuning experience.
- Clear communication and stakeholder engagement.
Benefits
- Work on cutting-edge technologies and impactful projects.
- Opportunities for career growth and development.
- Collaborative and inclusive work environment.
- Competitive salary and benefits package.
Senior Data Engineer | AWS Glue & Kafka | London (employer: VE3)
Contact Detail:
VE3 Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer | AWS Glue & Kafka role in London
✨Tip Number 1
Network like a pro! Reach out to your connections in the data engineering field, especially those who work with AWS Glue and Kafka. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your past projects, especially those involving data pipelines and AWS services. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering scenarios. Be ready to discuss how you've tackled challenges with Kafka or implemented CI/CD in your previous roles. We want to see your problem-solving skills in action!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets noticed. Plus, we love seeing candidates who are proactive and engaged with our company.
We think you need these skills to ace the Senior Data Engineer | AWS Glue & Kafka role in London
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights your experience with AWS, Kafka, and PySpark. We want to see how you've architected data flows and built pipelines, so don’t hold back on those details!
Showcase Your Projects: Include specific examples of projects where you’ve implemented CI/CD, quality gates, or observability. We love seeing real systems you've shipped, so let us know what you’ve accomplished!
Be Clear and Concise: When writing your cover letter, keep it straightforward. We appreciate clear communication, so make sure to articulate your skills and experiences without fluff.
Apply Through Our Website: Don’t forget to apply through our website! It’s the best way for us to receive your application and ensures you’re considered for this exciting role.
How to prepare for a job interview at VE3
✨Know Your Tech Inside Out
Make sure you’re well-versed in AWS Glue, Kafka, and PySpark. Brush up on your knowledge of data pipelines, CI/CD processes, and observability tools. Being able to discuss your past experiences with these technologies will show that you’re not just familiar but truly skilled.
✨Prepare Real-World Examples
Think of specific projects where you’ve designed and delivered data flows or implemented quality gates. Be ready to explain the challenges you faced and how you overcame them. This will demonstrate your hands-on experience and problem-solving skills.
✨Showcase Your Communication Skills
Since clear communication is key, practice explaining complex technical concepts in simple terms. You might be asked to describe your approach to stakeholder engagement or how you document your work, so have some examples ready.
✨Ask Insightful Questions
Prepare thoughtful questions about the company’s data architecture, team dynamics, or future projects. This shows your genuine interest in the role and helps you assess if the company is the right fit for you.