At a Glance
- Tasks: Design and maintain data pipelines for high-volume event data and optimise AWS infrastructure.
- Company: Join a fast-growing UK adtech scale-up with a focus on machine learning.
- Benefits: Competitive salary up to £90,000, hybrid working, and a strong wider package.
- Other info: Exciting opportunity to work on greenfield projects and scale data solutions.
- Why this job: Take ownership of impactful projects that drive customer success and innovation.
- Qualifications: Strong skills in Python, SQL, and AWS data infrastructure required.
The predicted salary is £90,000 per year.
This is a great opportunity to join a high-growth adtech scale-up where you can take ownership of large-scale data infrastructure that directly powers machine learning, product intelligence, and customer ROI.
THE COMPANY
This is a UK-based adtech SaaS platform that uses machine learning and real-time behavioural analytics to help brands optimise performance.
THE ROLE
Key responsibilities include:
- Designing, building, and maintaining batch and streaming data ingestion pipelines for high-volume event data
- Improving and optimising the AWS Redshift data warehouse, including modelling, performance, and cost efficiency
- Refactoring poorly structured data into clean, well-governed, ML-friendly datasets
- Building pipelines to support ML workflows, feature stores, A/B testing, and experimentation
- Ingesting and integrating new and under-utilised data sources
- Working on greenfield projects while scaling existing data infrastructure to billions of data points
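To give a flavour of the "clean, ML-friendly datasets" work above, here is a minimal, hypothetical sketch: deduplicating raw JSON event records by event ID and flattening a nested payload into flat rows. The field names (`event_id`, `payload`, etc.) are illustrative assumptions, not the company's actual schema.

```python
import json

def flatten_events(raw_events):
    """Deduplicate raw JSON event strings by event_id and flatten the
    nested payload into flat, ML-friendly rows (hypothetical schema)."""
    seen = set()
    rows = []
    for raw in raw_events:
        event = json.loads(raw)
        event_id = event["event_id"]
        if event_id in seen:  # drop duplicate deliveries
            continue
        seen.add(event_id)
        payload = event.get("payload", {})
        rows.append({
            "event_id": event_id,
            "user_id": event.get("user_id"),
            "event_type": event.get("type"),
            # flatten one level of nesting with a prefixed key
            **{f"payload_{k}": v for k, v in payload.items()},
        })
    return rows

raw = [
    '{"event_id": "e1", "user_id": "u1", "type": "click", "payload": {"ad_id": "a9"}}',
    '{"event_id": "e1", "user_id": "u1", "type": "click", "payload": {"ad_id": "a9"}}',
    '{"event_id": "e2", "user_id": "u2", "type": "view", "payload": {"ad_id": "a7"}}',
]
rows = flatten_events(raw)
```

In practice this kind of transform would run inside a Glue job or Lambda consumer rather than a standalone script, but the dedupe-and-flatten pattern is the same.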
YOUR SKILLS AND EXPERIENCE
You will bring strong capability in:
- Python and SQL
- AWS data infrastructure (S3, Redshift, Glue, Athena, Kinesis, Lambda)
- End-to-end ownership, from proof-of-concept through to production
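As a toy illustration of the Python + SQL combination, the sketch below runs a warehouse-style aggregation through Python. It uses in-memory sqlite3 purely as a stand-in; the role itself targets Redshift and Athena, whose SQL dialects differ in places.

```python
import sqlite3

# Toy stand-in for a warehouse query: count events per user.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, event_type TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", "click"), ("u1", "view"), ("u2", "click"), ("u1", "click")],
)
counts = conn.execute(
    """
    SELECT user_id, COUNT(*) AS n_events
    FROM events
    GROUP BY user_id
    ORDER BY n_events DESC
    """
).fetchall()
conn.close()
```

Being able to reason about a query like this, and about how it would behave over billions of rows in a columnar warehouse, is the level of fluency the role appears to expect.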
Data Engineer (AWS & Kinesis/Kafka) employer: Harnham - Data & Analytics Recruitment
Contact detail: Harnham - Data & Analytics Recruitment Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (AWS & Kinesis/Kafka) role
✨Tip Number 1
Network like a pro! Reach out to folks in the adtech space, especially those working with AWS and data engineering. A friendly chat can lead to insider info about job openings that aren't even advertised yet.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your projects, especially those involving Python, SQL, and AWS. This gives potential employers a taste of what you can do and sets you apart from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering scenarios. Think about how you'd design data pipelines or optimise data warehouses. We want you to be ready to impress with your problem-solving skills!
✨Tip Number 4
Don't forget to apply through our website! It’s the best way to ensure your application gets seen. Plus, we love seeing candidates who are proactive and engaged with our platform.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with AWS, Kinesis, and Kafka, and don’t forget to showcase any relevant projects that demonstrate your skills in building data pipelines and optimising data infrastructure.
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you’re passionate about data engineering and how your background aligns with the company’s mission. Be specific about what excites you about the role and the company.
Showcase Your Projects: If you've worked on any cool projects involving machine learning or data ingestion, make sure to mention them! We love seeing real-world applications of your skills, so include links to your GitHub or any relevant portfolios if you have them.
Apply Through Our Website: We encourage you to apply through our website for the best chance of getting noticed. It helps us keep track of applications and ensures you’re considered for the role. Plus, it’s super easy to do!
How to prepare for a job interview at Harnham - Data & Analytics Recruitment
✨Know Your Tech Stack
Make sure you’re well-versed in the technologies mentioned in the job description, especially AWS services like S3, Redshift, and Kinesis. Brush up on your Python and SQL skills, as you’ll likely be asked to demonstrate your knowledge during the interview.
✨Showcase Your Projects
Prepare to discuss specific projects where you've designed or optimised data pipelines. Be ready to explain your thought process, the challenges you faced, and how you overcame them. This will show your end-to-end ownership experience and problem-solving skills.
✨Understand the Company’s Goals
Research the adtech industry and understand how machine learning and real-time analytics can impact brand performance. Being able to articulate how your role as a Data Engineer contributes to these goals will impress your interviewers.
✨Ask Insightful Questions
Prepare thoughtful questions about the company’s data infrastructure and future projects. This not only shows your interest but also helps you gauge if the company aligns with your career aspirations. Think about asking how they handle scaling data to billions of points or their approach to A/B testing.
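If A/B testing does come up, one common implementation idea worth knowing is deterministic bucketing: hashing the user and experiment name so a user always lands in the same variant without storing assignments. The sketch below is a generic, hypothetical helper, not tied to any framework this company uses.

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministically bucket a user into an experiment variant by
    hashing experiment:user_id (hypothetical helper)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

variant = assign_variant("u123", "new-ranker")
```

The design choice to discuss: determinism gives stable assignments and easy replay, while salting the hash with the experiment name keeps buckets independent across experiments.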