At a Glance
- Tasks: Design and build data pipelines on AWS using Glue and Kafka.
- Company: VE3, a leading tech consultancy transforming businesses with innovative solutions.
- Benefits: Competitive salary, career growth opportunities, and a collaborative work environment.
- Why this job: Join us to work on cutting-edge tech and make a real impact in data engineering.
- Qualifications: 10 years of experience in data engineering with strong Python/PySpark skills.
- Other info: Dynamic team culture focused on innovation and quality.
The predicted salary is between £43,200 and £72,000 per year.
London, United Kingdom | Posted on 09/01/2026
VE3 is a technology and business consultancy focused on delivering end-to-end technology solutions and products. We have successfully served enterprises across multiple markets, in both the public and private sectors. Our services span all aspects of business, providing a holistic approach to managing an organization. We are committed to providing technical innovations and tools that give organizations the critical information they need to make decisions, driving business transformation through cost savings and increased operational efficiency. Our commitment to quality is adopted throughout the organization and sets the foundation for delivering our full suite of capabilities.
About the role:
We’re seeking a hands-on Senior Data Engineer (~10 years’ experience) to deliver production data pipelines on AWS. You’ll design and build streaming (Kafka) and batch pipelines using Glue/EMR (PySpark), implement data contracts and quality gates, and set up CI/CD and observability. You’ve shipped real systems, coached teams, and you document as you go.
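"Data contracts and quality gates" in practice usually means failing a pipeline run when incoming data breaks an agreed schema or quality threshold. Below is a minimal sketch of that idea in plain PySpark; the contract, column names, and the 1% null threshold are invented for illustration, not taken from the posting.

```python
# A toy quality gate: enforce a simple column contract before loading.
# Columns, types, and the 1% null threshold are illustrative assumptions.
from pyspark.sql import DataFrame, functions as F

CONTRACT = {"order_id": "string", "amount": "double", "event_ts": "timestamp"}


def enforce_contract(df: DataFrame) -> DataFrame:
    # Gate 1: required columns must exist with the agreed types.
    actual = dict(df.dtypes)
    for col, expected in CONTRACT.items():
        if actual.get(col) != expected:
            raise ValueError(
                f"contract breach: {col} is {actual.get(col)}, expected {expected}"
            )
    # Gate 2: the key column must be non-null in at least 99% of rows.
    total = df.count()
    nulls = df.filter(F.col("order_id").isNull()).count()
    if total and nulls / total > 0.01:
        raise ValueError(f"quality gate failed: {nulls}/{total} null order_id")
    return df
```

Raising inside the job is the simplest gate: the run fails loudly and the bad batch never reaches downstream tables.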
Requirements
What you’ll do:
- Architect and deliver lake/lakehouse data flows on AWS (S3 + Glue Data Catalog + Glue ETL/EMR).
- Build Kafka consumers and producers, managing schema evolution, resilience, and dead-letter queues (DLQs); see the sketch after this list.
- Implement PySpark transformations, CDC merges, partitioning, and optimization.
- Add quality/observability (tests, monitoring, alerting, lineage basics).
- Harden security (IAM least privilege, KMS, private networking).
- Create runbooks, diagrams, and handover materials.
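For the Kafka bullet above, "resilience and DLQs" typically means routing poison messages to a dead-letter topic so the main stream keeps flowing. Here is a minimal sketch using the confluent-kafka client; the broker address, the `orders`/`orders.dlq` topics, and the `parse_order` validator are hypothetical placeholders, and a production pipeline would add schema-registry integration and retry logic.

```python
# A minimal consume-with-DLQ loop using the confluent-kafka client.
# Broker address, topic names, and parse_order are illustrative assumptions.
import json

from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "orders-pipeline",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,            # commit only after success or DLQ
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["orders"])              # hypothetical source topic


def parse_order(raw: bytes) -> dict:
    """Stand-in for schema validation; raises on bad records."""
    record = json.loads(raw)
    if "order_id" not in record:
        raise ValueError("missing order_id")
    return record


try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            raise RuntimeError(msg.error())
        try:
            order = parse_order(msg.value())
            # ... hand the record to the transform/sink stage here ...
        except (ValueError, json.JSONDecodeError) as exc:
            # Poison message: park it on the DLQ with the failure reason
            # so the main stream keeps moving.
            producer.produce("orders.dlq", value=msg.value(),
                             headers={"error": str(exc)})
            producer.flush()
        consumer.commit(message=msg)        # at-least-once processing
finally:
    consumer.close()
```

Committing offsets manually after each record (or DLQ write) gives at-least-once semantics; auto-commit risks silently skipping failed messages.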
What you’ll bring:
- Python/PySpark in production with tests and CI/CD.
- Data modeling (bronze/silver/gold, CDC, SCD2) and data contracts (see the CDC merge sketch after this list).
- IaC (Terraform/CDK) and cost/performance tuning experience.
- Clear communication and stakeholder engagement.
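The bronze/silver/gold plus CDC combination usually boils down to merging a change feed into a curated table. The sketch below assumes Delta Lake as the table format (the posting says lake/lakehouse but names no format); the S3 paths, column names, and the `op = 'D'` delete-flag convention are invented for the example.

```python
# CDC upsert from a bronze change feed into a silver table.
# A sketch assuming Delta Lake on S3; paths, columns, and the 'op'
# delete-flag convention are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = (
    SparkSession.builder.appName("bronze-to-silver-cdc")
    .config("spark.sql.extensions",
            "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# CDC feeds often carry several versions of a key; keep only the latest.
w = Window.partitionBy("order_id").orderBy(F.col("change_ts").desc())
latest = (
    spark.read.format("delta").load("s3://lake/bronze/orders_cdc")  # placeholder
    .withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn")
)

silver = DeltaTable.forPath(spark, "s3://lake/silver/orders")  # placeholder
(
    silver.alias("t")
    .merge(latest.alias("s"), "t.order_id = s.order_id")
    .whenMatchedDelete(condition="s.op = 'D'")        # CDC delete marker
    .whenMatchedUpdateAll(condition="s.op <> 'D'")
    .whenNotMatchedInsertAll(condition="s.op <> 'D'")
    .execute()
)
```

Deduplicating to the latest change per key before the merge is what keeps the upsert deterministic when a batch contains multiple updates to the same row.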
What we offer:
- Work on cutting-edge technologies and impactful projects.
- Opportunities for career growth and development.
- A collaborative and inclusive work environment.
- A competitive salary and benefits package.
Senior Data Engineer | AWS Glue & Kafka | employer: Data Controller, VE Ltd
Contact Detail:
Data Controller, VE Ltd Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Senior Data Engineer | AWS Glue & Kafka role
✨Tip Number 1
Network like a pro! Reach out to folks in your industry on LinkedIn or at meetups. We all know that sometimes it’s not just what you know, but who you know that can get you in the door.
✨Tip Number 2
Prepare for those interviews by practising common questions and scenarios related to AWS Glue and Kafka. We recommend doing mock interviews with friends or using online platforms to boost your confidence.
✨Tip Number 3
Showcase your projects! If you’ve built any data pipelines or worked with PySpark, make sure to have them ready to discuss. We love seeing real-world applications of your skills during interviews.
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we’re always looking for passionate candidates like you!
We think you need these skills to ace the Senior Data Engineer | AWS Glue & Kafka role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Senior Data Engineer role. Highlight your experience with AWS, Kafka, and PySpark, and don’t forget to showcase any relevant projects you've worked on. We want to see how your skills align with what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to tell us why you’re passionate about data engineering and how your experience makes you a perfect fit for our team. Keep it concise but engaging – we love a good story!
Showcase Your Projects: If you’ve built any data pipelines or worked on relevant projects, make sure to mention them in your application. We’re keen to see real-world examples of your work, especially those that demonstrate your ability to deliver production data pipelines.
Apply Through Our Website: We encourage you to apply directly through our website. It's the best way to ensure your application gets into the right hands, and it shows you're serious about the role!
How to prepare for a job interview at Data Controller, VE Ltd
✨Know Your Tech Inside Out
Make sure you’re well-versed in AWS Glue, Kafka, and PySpark. Brush up on your knowledge of data pipelines, CI/CD processes, and observability tools. Being able to discuss your past experiences with these technologies will show that you’re not just familiar but truly capable.
✨Prepare Real-World Examples
Think of specific projects where you’ve designed and built data flows or implemented quality gates. Be ready to explain the challenges you faced and how you overcame them. This will demonstrate your hands-on experience and problem-solving skills.
✨Showcase Your Communication Skills
Since clear communication is key, practice explaining complex technical concepts in simple terms. You might be asked to engage with stakeholders, so being able to articulate your ideas clearly will set you apart from other candidates.
✨Ask Insightful Questions
Prepare thoughtful questions about the company’s data strategy, team dynamics, and future projects. This shows your genuine interest in the role and helps you assess if the company is the right fit for you.