At a Glance
- Tasks: Design and build scalable data pipelines using AWS, Databricks, and PySpark.
- Company: Join a high-impact engineering team focused on innovative data solutions.
- Benefits: Enjoy a hybrid work model with 2 days onsite in London or Glasgow.
- Why this job: Work on cutting-edge technology in a collaborative, forward-thinking environment.
- Qualifications: Experience in cloud environments, AWS services, and advanced PySpark required.
- Other info: Must be available to start in mid-August.
The predicted salary is between £64,000 and £72,000 per year.
Job Description
Senior Data Engineer | AWS/Databricks/PySpark | London/Glasgow (Hybrid) | August Start
Role: Senior Data Engineer
Location: Hybrid, with 2 days/week onsite in either Central London or Glasgow.
Start Date: Must be able to start mid-August.
Salary: £80k-£90k (Senior) | £90k-£95k (Lead)
About The Role
Our partner is looking for a Senior Data Engineer to join a high-impact engineering team delivering scalable data solutions for complex marketing and customer insight use cases. This is an opportunity to work on cutting-edge data pipelines, cloud-native platforms and real-time data flows in a collaborative, forward-thinking environment.
You’ll be involved in designing and building production-grade ETL pipelines, driving DevOps practices across data systems and contributing to high-availability architectures using tools like Databricks, Spark and Airflow, all within a modern AWS ecosystem.
Responsibilities
- Architect and build scalable, secure data pipelines using AWS, Databricks and PySpark (see the illustrative sketch after this list).
- Design and implement robust ETL/ELT solutions for both structured and unstructured data.
- Automate workflows and orchestrate jobs using Airflow and GitHub Actions.
- Integrate data with third-party APIs to support real-time marketing insights.
- Collaborate closely with cross-functional teams including Data Science, Software Engineering and Product.
- Champion best practices in data governance, observability and compliance.
- Contribute to CI/CD pipeline development and infrastructure automation (Terraform, AWS DevOps).
- Provide input into technical decisions, peer reviews and solution design.
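By way of illustration only, a minimal PySpark step of the kind the pipelines bullet above describes might look like the sketch below; the bucket path, schema and table name are hypothetical placeholders, not details taken from this role.

```python
# Illustrative sketch only: a minimal PySpark batch ETL step of the kind the role describes.
# The S3 path, columns and table name are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-insight-etl").getOrCreate()

# Extract: read raw event data landed in S3 (hypothetical bucket/prefix)
raw = spark.read.json("s3://example-raw-bucket/events/2025/08/")

# Transform: basic cleansing and a per-customer daily aggregate
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Load: append to a Delta table registered in the catalog (hypothetical name)
(daily.write.format("delta")
      .mode("append")
      .saveAsTable("marketing.customer_daily_events"))
```

In a role like this, a step of that shape would typically run as a Databricks Job, with Delta Lake providing the table format and Unity Catalog the governance layer.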
Requirements
- Proven experience as a Data Engineer in cloud-first environments.
- Strong commercial knowledge of AWS services (e.g. S3, Glue, Redshift).
- Advanced PySpark and Databricks experience (Delta Lake, Unity Catalog, Databricks Jobs, etc.).
- Proficient in SQL (T-SQL/Spark SQL) and Python for data transformation and scripting.
- Hands-on experience with workflow orchestration tools such as Airflow (a minimal DAG sketch follows this list).
- Strong version control and DevOps exposure (Git, GitHub Actions, Terraform).
- Familiar with data quality tools and metadata/cataloguing (e.g. Great Expectations, Unity Catalog).
- Beneficial: MarTech domain knowledge.
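As a rough sketch of the Airflow orchestration named above (the DAG id, schedule and callables are hypothetical, and the `schedule` argument assumes Airflow 2.4+), a daily pipeline might be wired up like this:

```python
# Illustrative sketch only: a minimal Airflow DAG of the kind the orchestration requirement refers to.
# DAG id, schedule and task callables are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from a third-party API")

def transform_load():
    print("trigger the PySpark/Databricks transformation")

with DAG(
    dag_id="example_marketing_pipeline",
    start_date=datetime(2025, 8, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_load", python_callable=transform_load)

    extract_task >> load_task  # run extract before transform/load
```

In practice, DAGs like this would typically be linted, tested and deployed through the GitHub Actions CI/CD pipelines mentioned in the responsibilities.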
Notable: This is a hybrid role with 2 days/week onsite in either Central London or Glasgow. You must be able to start in mid-August.
Employer: WüNDER_TALENT
Contact Detail:
WüNDER_TALENT Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land Senior Data Engineer - AWS/Databricks/PySpark - August Start Date
✨Tip Number 1
Familiarise yourself with the latest AWS services and tools mentioned in the job description, such as S3, Glue, and Redshift. Being able to discuss these technologies confidently during your interview will show that you're proactive and knowledgeable.
✨Tip Number 2
Brush up on your PySpark and Databricks skills, especially focusing on Delta Lake and Unity Catalog. Consider working on a small project or contributing to an open-source project to demonstrate your hands-on experience with these tools.
✨Tip Number 3
Network with professionals in the data engineering field, particularly those who work with AWS and Databricks. Attend meetups or webinars to gain insights and potentially get referrals that could help you land the job.
✨Tip Number 4
Prepare to discuss your experience with workflow orchestration tools like Airflow and your exposure to DevOps practices. Be ready to share specific examples of how you've implemented these in past projects to highlight your practical knowledge.
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in AWS, Databricks, and PySpark. Use specific examples of projects where you've built scalable data pipelines or automated workflows to demonstrate your expertise.
Craft a Compelling Cover Letter: In your cover letter, express your enthusiasm for the role and the company. Mention how your skills align with the responsibilities listed in the job description, particularly your experience with ETL processes and DevOps practices.
Showcase Technical Skills: Clearly outline your technical skills related to the job, such as proficiency in SQL, Python, and workflow orchestration tools like Airflow. Providing concrete examples of how you've used these skills in past roles can strengthen your application.
Highlight Collaboration Experience: Since the role involves working closely with cross-functional teams, include examples of past collaborations. Discuss how you contributed to team projects and any best practices you championed in data governance or compliance.
How to prepare for a job interview at WüNDER_TALENT
✨Showcase Your Technical Skills
Be prepared to discuss your experience with AWS, Databricks, and PySpark in detail. Highlight specific projects where you've designed and built ETL pipelines or automated workflows, as this will demonstrate your hands-on expertise.
✨Understand the Company’s Needs
Research the company’s focus on marketing and customer insights. Be ready to explain how your skills can contribute to their goals, especially in building scalable data solutions that support real-time insights.
✨Prepare for Scenario-Based Questions
Expect questions that assess your problem-solving abilities. Prepare to discuss how you would approach designing a data pipeline or integrating third-party APIs, showcasing your thought process and technical knowledge.
✨Emphasise Collaboration and Best Practices
Since the role involves working closely with cross-functional teams, be sure to highlight your experience in collaborative environments. Discuss how you champion best practices in data governance and compliance, which are crucial for the role.