At a Glance
- Tasks: Join us as a Data Engineer to transform data infrastructure and support CRM integration.
- Company: Work with a dynamic client focused on modernising their data capabilities.
- Benefits: Enjoy flexible working options and the chance to enhance your tech skills.
- Why this job: Be part of a critical project that impacts real-world data solutions and collaboration.
- Qualifications: Experience in data engineering, AWS, and Python is essential; R knowledge is a bonus.
- Other info: This is a contract role with opportunities for hands-on learning and growth.
The predicted salary is between £48,000 and £72,000 per year.
We are partnering with a client undergoing a critical transformation of their data infrastructure and CRM capabilities. They are seeking a hands-on Data Engineer with strong AWS experience to support two key initiatives: finalising a Braze CRM integration and migrating legacy R-based data pipelines to a modern cloud-native stack.
Phase 1: CRM Data Engineering (Month 1)
- Support the CRM team with data engineering requests.
- QA, deploy, and monitor data pipelines that push third-party-formatted data into Braze.
- Manage ad hoc CRM data tasks including journey updates and API integrations.
- Work extensively within AWS using Lambda, API Gateway, and Python to maintain and enhance integrations.
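To give a flavour of this work, here is a minimal sketch of a Lambda handler that forwards user attributes to Braze's public /users/track REST endpoint. The event shape, field names, and environment variables are assumptions made for the example, not the client's actual integration.

```python
import json
import os
import urllib.request

# Assumed configuration -- the real REST endpoint and API key would come
# from the Braze dashboard (e.g. https://rest.iad-01.braze.com).
BRAZE_URL = os.environ["BRAZE_REST_ENDPOINT"] + "/users/track"
BRAZE_API_KEY = os.environ["BRAZE_API_KEY"]


def handler(event, context):
    """Triggered via API Gateway; forwards user attributes to Braze."""
    body = json.loads(event.get("body") or "{}")

    # Braze's /users/track accepts an "attributes" array of user objects.
    payload = json.dumps({
        "attributes": [{
            "external_id": body["user_id"],  # illustrative field names
            "email": body.get("email"),
        }]
    }).encode("utf-8")

    request = urllib.request.Request(
        BRAZE_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {BRAZE_API_KEY}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return {"statusCode": response.status, "body": response.read().decode()}
```

Sticking to the standard library's urllib keeps the deployment package free of extra dependencies, which simplifies Lambda packaging.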
Phase 2: Legacy Pipeline Migration (Months 2-3)
- Analyse and understand existing R-based data pipelines created by data scientists.
- Migrate these pipelines into Airflow, dbt, and Terraform workflows (see the sketch after this list).
- Modernise and scale legacy infrastructure running on AWS.
- Collaborate with engineering teams to ensure a smooth transition and system stability.
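As an illustration of where Phase 2 lands, a migrated pipeline might become an Airflow DAG that runs and then tests dbt models. The sketch below assumes Airflow 2.4 or newer; the DAG id, schedule, and project path are placeholders, not the client's setup.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative DAG: the schedule, project path, and task names are
# placeholders standing in for a migrated R pipeline.
with DAG(
    dag_id="legacy_r_pipeline_migrated",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # dbt owns the SQL transformations that replaced the R scripts.
    run_models = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/analytics",
    )
    test_models = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/analytics",
    )

    run_models >> test_models
```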
Languages & Scripting:
- Python (primary scripting language for Lambda functions)
- SQL (BigQuery, Redshift)
- R (not essential but beneficial for interpreting existing scripts)
Cloud & Infrastructure:
- AWS services including Lambda, API Gateway, S3, CloudWatch, Kinesis Firehose
- Terraform for infrastructure as code
Orchestration & Transformation:
- Apache Airflow
- dbt
CRM & Marketing Tools:
- Braze (preferred)
- Familiarity with other CRM/marketing automation tools such as Iterable or Salesforce Marketing Cloud is a plus
Candidate Profile
- Proven commercial experience as a data engineer; industry background is not critical.
- Hands-on, pragmatic, and able to deliver quickly with minimal supervision.
- Strong communicator, able to clearly explain technical decisions and project status.
- Willing to take on essential but sometimes "tedious" tasks without hesitation.
- Practical attitude, especially when working with legacy systems or imperfect code.
- Ideally, experience migrating legacy scripting environments (e.g., R to Python) to modern pipelines.
Data Engineer (Contract) employer: Harnham
Contact Detail:
Harnham Recruiting Team
StudySmarter Expert Advice 🤫
We think this is how you could land the Data Engineer (Contract) role
✨Tip Number 1
Familiarise yourself with AWS services, especially Lambda and API Gateway, as these are crucial for the role. Consider building a small project that utilises these services to demonstrate your hands-on experience.
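If you want a concrete first exercise, the snippet below invokes a deployed Lambda function with boto3. It assumes you have AWS credentials configured and a function already deployed; the function name and region are placeholders.

```python
import json

import boto3

# Placeholder function name and region -- substitute whatever you deployed.
client = boto3.client("lambda", region_name="eu-west-2")

response = client.invoke(
    FunctionName="my-demo-function",
    Payload=json.dumps({"user_id": "123"}).encode("utf-8"),
)
print(json.loads(response["Payload"].read()))
```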
✨Tip Number 2
Brush up on your Python skills, particularly in the context of data engineering. You might want to explore libraries like Pandas or NumPy, which can be beneficial when working with data pipelines.
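For instance, here is a minimal pandas aggregation of the kind a pipeline step might perform; the data is invented purely for illustration.

```python
import pandas as pd

# Toy event data standing in for rows pulled from a source system.
events = pd.DataFrame({
    "user_id": ["a", "a", "b"],
    "event": ["open", "click", "open"],
    "ts": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
})

# Count events per user -- a typical aggregation before loading downstream.
summary = (
    events.groupby("user_id")
    .agg(events=("event", "count"), last_seen=("ts", "max"))
    .reset_index()
)
print(summary)
```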
✨Tip Number 3
Gain a solid understanding of Apache Airflow and dbt, as these tools will be essential for migrating legacy pipelines. You could try setting up a simple workflow using Airflow to showcase your ability to manage orchestration.
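A minimal starting point, assuming Airflow 2.4 or newer with the TaskFlow API, might look like the sketch below; the task logic is purely illustrative.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(start_date=datetime(2024, 1, 1), schedule=None, catchup=False)
def demo_workflow():
    """Two-step toy pipeline for practising Airflow orchestration."""

    @task
    def extract():
        # Stand-in for pulling rows from a source system.
        return [{"user_id": "a"}, {"user_id": "b"}]

    @task
    def transform(rows):
        # Stand-in for a dbt-style transformation step.
        return len(rows)

    transform(extract())


demo_workflow()
```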
✨Tip Number 4
If you have any experience with CRM tools like Braze, make sure to highlight it. If not, consider exploring their documentation or tutorials to get a basic understanding of how they work and their integration capabilities.
We think you need these skills to ace the Data Engineer (Contract) role
Some tips for your application 🫡
Tailor Your CV: Make sure your CV highlights relevant experience in data engineering, particularly with AWS and Python. Emphasise any previous work with CRM systems like Braze or similar tools.
Craft a Compelling Cover Letter: In your cover letter, explain why you're interested in this specific role and how your skills align with the job requirements. Mention your hands-on experience with data pipelines and any relevant projects you've completed.
Showcase Technical Skills: Clearly list your technical skills related to the job description, such as AWS services, Python, SQL, and any experience with Airflow or Terraform. Use specific examples to demonstrate your proficiency.
Highlight Problem-Solving Abilities: Provide examples of how you've tackled challenges in previous roles, especially those involving legacy systems or data migration. This will show your practical attitude and ability to deliver results under minimal supervision.
How to prepare for a job interview at Harnham
✨Showcase Your AWS Expertise
Make sure to highlight your experience with AWS services, especially Lambda and API Gateway. Be prepared to discuss specific projects where you've used these tools to solve problems or improve processes.
✨Demonstrate Your Data Pipeline Knowledge
Be ready to talk about your experience with data pipelines, particularly in migrating legacy systems. Discuss any relevant projects where you've worked with Airflow, dbt, or Terraform, and how you approached the challenges involved.
✨Communicate Clearly
As a strong communicator, you should practise explaining complex technical concepts in simple terms. This will be crucial when discussing your past experiences and how they relate to the role, especially when collaborating with non-technical teams.
✨Embrace the Tedious Tasks
Prepare to discuss your attitude towards essential but sometimes tedious tasks. Share examples of how you've tackled such tasks in the past and why you believe they are important for the success of a project.