At a Glance
- Tasks: Build and enhance data pipelines, supporting analytics and AI initiatives.
- Company: Rapidly growing tech scale-up with a focus on data innovation.
- Benefits: 25 days annual leave, stock options, remote work, and career development.
- Why this job: Join a dynamic team and shape the future of data-driven decision-making.
- Qualifications: Strong SQL and Python skills, experience with AWS Redshift and dbt.
- Other info: Enjoy regular team socials and 24/7 wellbeing support.
The predicted salary is between £48,000 and £72,000 per year.
An exciting, high-growth technology scale-up is seeking a Data Engineer to help evolve and strengthen its modern data foundations. This organisation builds a global, API-first platform used by major enterprise brands to improve data quality, automate workflows, and enhance the reliability of their marketing and operational data. This is a fantastic opportunity to join a business experiencing rapid expansion, where data is central to product innovation, analytics, machine learning, and AI initiatives. You will work with modern tooling, collaborate closely with Engineering and DevOps, and help shape a robust, scalable data ecosystem.
The Role
As a Data Engineer, you will play a key role in building and improving data pipelines, increasing access to platform and event data, and developing reliable, reusable datasets that support analytics and data-driven decision-making. You’ll take ownership of evolving the company's AWS-based data infrastructure and contribute directly to high-impact initiatives across analytics, ML, and AI.
Key Responsibilities:
- Build, maintain and enhance data pipelines from operational systems into analytics platforms
- Partner with Engineering & DevOps to support ingestion, replication and platform data flows
- Define data storage strategy and manage data in AWS S3
- Build analytics-ready datasets in AWS Redshift, using dbt for transformation
- Improve data quality, monitoring and documentation across the data stack
- Support ML/AI initiatives with clean, well-structured datasets
- Ensure data practices align with GDPR, security and compliance standards
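To give a flavour of the day-to-day work the responsibilities above describe, here is a minimal, hypothetical sketch of an extract → validate → load step in Python. The source data, column names, and the JSON-lines output (standing in for an S3 put) are illustrative assumptions, not details taken from the job advert.

```python
import csv
import io
import json

# Illustrative operational export; in practice this would come from a
# source system, not an inline string.
RAW_EVENTS = """event_id,user_id,amount
1,42,9.99
2,,5.00
3,7,12.50
"""

def extract(raw: str) -> list[dict]:
    """Parse an operational CSV export into rows."""
    return list(csv.DictReader(io.StringIO(raw)))

def validate(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split rows into clean and quarantined (here: missing user_id)."""
    clean = [r for r in rows if r["user_id"]]
    quarantined = [r for r in rows if not r["user_id"]]
    return clean, quarantined

def load(rows: list[dict]) -> str:
    """Serialise analytics-ready rows as JSON lines (stand-in for an S3 upload)."""
    return "\n".join(json.dumps(r) for r in rows)

clean, quarantined = validate(extract(RAW_EVENTS))
payload = load(clean)
```

Quarantining bad rows rather than silently dropping them is one common way to meet the "improve data quality and monitoring" point above: the rejected records stay available for inspection and alerting.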
Experience Required
- Strong SQL skills and experience working with AWS Redshift
- Strong Python experience for production data pipelines
- Hands-on experience with dbt for modelling and transformations
- Familiarity with AWS services
- Comfortable partnering with Engineering and DevOps on data workflows
- Understanding of version control and CI/CD best practices
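The last point, version control and CI/CD, usually means writing pipeline logic as small, pure functions that unit tests can cover on every commit. The sketch below is a hypothetical example of that style; the function and field names are illustrative assumptions, not from the advert.

```python
# Small, pure transformations: easy to review in version control and
# to assert on in a CI test suite.

def normalise_email(raw: str) -> str:
    """Lower-case and trim an email so joins across systems are stable."""
    return raw.strip().lower()

def dedupe_by_key(rows: list[dict], key: str) -> list[dict]:
    """Keep the first occurrence of each key, preserving input order."""
    seen: set = set()
    out = []
    for row in rows:
        k = row[key]
        if k not in seen:
            seen.add(k)
            out.append(row)
    return out

rows = [
    {"email": " Alice@Example.com ", "plan": "pro"},
    {"email": "alice@example.com", "plan": "free"},
]
cleaned = dedupe_by_key(
    [{**r, "email": normalise_email(r["email"])} for r in rows], "email"
)
```

Because these functions take plain data in and return plain data out, a CI pipeline can exercise them without a live warehouse connection.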
Benefits:
- 25 days' annual leave and your birthday off
- Stock options eligibility
- Remote working
- Career development opportunities in a scaling tech environment
- Employee Assistance Programme – 24/7 wellbeing support
- Regular team socials and events
- Pension plan
AWS Data Engineer — employer: Jefferson Frank
Contact Details:
Jefferson Frank Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the AWS Data Engineer role
✨Tip Number 1
Network like a pro! Reach out to people in the industry, attend meetups, and connect with potential colleagues on LinkedIn. You never know who might have the inside scoop on job openings or can refer you directly.
✨Tip Number 2
Show off your skills! Create a portfolio showcasing your data projects, especially those involving AWS, SQL, and Python. This gives you a chance to demonstrate your expertise and makes you stand out from the crowd.
✨Tip Number 3
Prepare for interviews by brushing up on common data engineering questions and scenarios. Practice explaining your past projects and how they relate to the role you're applying for. Confidence is key!
✨Tip Number 4
Don’t forget to apply through our website! It’s the best way to ensure your application gets seen by the right people. Plus, we love seeing candidates who are proactive about their job search.
We think you need these skills to ace the AWS Data Engineer role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV is tailored to the Data Engineer role. Highlight your experience with AWS, SQL, and Python, and don’t forget to mention any hands-on work with dbt. We want to see how your skills align with what we’re looking for!
Craft a Compelling Cover Letter: Your cover letter is your chance to shine! Use it to explain why you're excited about this opportunity and how your background makes you a perfect fit. Keep it engaging and personal – we love to see your personality come through!
Showcase Relevant Projects: If you’ve worked on any projects that relate to data pipelines, analytics, or machine learning, make sure to include them in your application. We’re keen to see real examples of your work and how you’ve tackled challenges in the past.
Apply Through Our Website: We encourage you to apply directly through our website. It’s the best way for us to receive your application and ensures you’re considered for the role. Plus, it shows you’re serious about joining our team!
How to prepare for a job interview at Jefferson Frank
✨Know Your AWS Inside Out
Make sure you brush up on your AWS knowledge, especially around services like S3 and Redshift. Be ready to discuss how you've used these tools in past projects, as well as any challenges you faced and how you overcame them.
✨Show Off Your SQL Skills
Prepare to demonstrate your SQL prowess during the interview. You might be asked to solve a problem or optimise a query on the spot, so practice common SQL scenarios and be ready to explain your thought process.
✨Talk About Data Pipelines
Be prepared to discuss your experience with building and maintaining data pipelines. Share specific examples of how you've improved data quality or streamlined workflows, and highlight any collaboration with Engineering and DevOps teams.
✨Understand GDPR and Compliance
Since data practices must align with GDPR and compliance standards, make sure you can articulate your understanding of these regulations. Discuss how you've implemented data governance in previous roles to ensure data security and compliance.